Oct 31 20:55:24.231745 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 31 20:55:24.231770 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Oct 31 19:37:39 -00 2025 Oct 31 20:55:24.231778 kernel: KASLR enabled Oct 31 20:55:24.231785 kernel: efi: EFI v2.7 by EDK II Oct 31 20:55:24.231790 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 31 20:55:24.231796 kernel: random: crng init done Oct 31 20:55:24.231803 kernel: secureboot: Secure boot disabled Oct 31 20:55:24.231810 kernel: ACPI: Early table checksum verification disabled Oct 31 20:55:24.231817 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 31 20:55:24.231824 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 31 20:55:24.231830 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231836 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231842 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231848 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231856 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231863 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231869 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231876 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231882 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 20:55:24.231888 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 31 20:55:24.231894 kernel: ACPI: Use ACPI SPCR as default console: No Oct 31 20:55:24.231901 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 31 20:55:24.231908 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 31 20:55:24.231915 kernel: Zone ranges: Oct 31 20:55:24.231921 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 31 20:55:24.231928 kernel: DMA32 empty Oct 31 20:55:24.231934 kernel: Normal empty Oct 31 20:55:24.231940 kernel: Device empty Oct 31 20:55:24.231946 kernel: Movable zone start for each node Oct 31 20:55:24.231952 kernel: Early memory node ranges Oct 31 20:55:24.231959 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 31 20:55:24.231965 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 31 20:55:24.231971 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 31 20:55:24.231978 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 31 20:55:24.231985 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 31 20:55:24.231991 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 31 20:55:24.231998 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 31 20:55:24.232004 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 31 20:55:24.232010 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 31 20:55:24.232017 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 31 20:55:24.232027 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 
31 20:55:24.232034 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 31 20:55:24.232041 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 31 20:55:24.232143 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 31 20:55:24.232153 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 31 20:55:24.232160 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 31 20:55:24.232166 kernel: psci: probing for conduit method from ACPI. Oct 31 20:55:24.232173 kernel: psci: PSCIv1.1 detected in firmware. Oct 31 20:55:24.232183 kernel: psci: Using standard PSCI v0.2 function IDs Oct 31 20:55:24.232190 kernel: psci: Trusted OS migration not required Oct 31 20:55:24.232197 kernel: psci: SMC Calling Convention v1.1 Oct 31 20:55:24.232204 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 31 20:55:24.232211 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 31 20:55:24.232218 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 31 20:55:24.232225 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 31 20:55:24.232232 kernel: Detected PIPT I-cache on CPU0 Oct 31 20:55:24.232238 kernel: CPU features: detected: GIC system register CPU interface Oct 31 20:55:24.232245 kernel: CPU features: detected: Spectre-v4 Oct 31 20:55:24.232252 kernel: CPU features: detected: Spectre-BHB Oct 31 20:55:24.232260 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 31 20:55:24.232267 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 31 20:55:24.232274 kernel: CPU features: detected: ARM erratum 1418040 Oct 31 20:55:24.232280 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 31 20:55:24.232287 kernel: alternatives: applying boot alternatives Oct 31 20:55:24.232295 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efdc74f99e3f51ed6e04024d138ff4e894c899f263395f02b1e500da138da28d Oct 31 20:55:24.232302 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 31 20:55:24.232309 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 20:55:24.232316 kernel: Fallback order for Node 0: 0 Oct 31 20:55:24.232323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 31 20:55:24.232330 kernel: Policy zone: DMA Oct 31 20:55:24.232337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 20:55:24.232344 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 31 20:55:24.232351 kernel: software IO TLB: area num 4. Oct 31 20:55:24.232357 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 31 20:55:24.232364 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 31 20:55:24.232371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 31 20:55:24.232378 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 20:55:24.232385 kernel: rcu: RCU event tracing is enabled. Oct 31 20:55:24.232392 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 31 20:55:24.232399 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 20:55:24.232407 kernel: Tracing variant of Tasks RCU enabled. 
Oct 31 20:55:24.232414 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 20:55:24.232421 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 31 20:55:24.232428 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 20:55:24.232435 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 20:55:24.232441 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 31 20:55:24.232448 kernel: GICv3: 256 SPIs implemented Oct 31 20:55:24.232455 kernel: GICv3: 0 Extended SPIs implemented Oct 31 20:55:24.232461 kernel: Root IRQ handler: gic_handle_irq Oct 31 20:55:24.232468 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 31 20:55:24.232475 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 31 20:55:24.232483 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 31 20:55:24.232489 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 31 20:55:24.232496 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 31 20:55:24.232503 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 31 20:55:24.232510 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 31 20:55:24.232517 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 31 20:55:24.232523 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 31 20:55:24.232531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 20:55:24.232537 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 31 20:55:24.232544 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 31 20:55:24.232551 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 31 20:55:24.232559 kernel: arm-pv: using stolen time PV Oct 31 20:55:24.232566 kernel: Console: colour dummy device 80x25 Oct 31 20:55:24.232574 kernel: ACPI: Core revision 20240827 Oct 31 20:55:24.232581 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 31 20:55:24.232588 kernel: pid_max: default: 32768 minimum: 301 Oct 31 20:55:24.232595 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 31 20:55:24.232602 kernel: landlock: Up and running. Oct 31 20:55:24.232610 kernel: SELinux: Initializing. Oct 31 20:55:24.232618 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 20:55:24.232625 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 20:55:24.232632 kernel: rcu: Hierarchical SRCU implementation. Oct 31 20:55:24.232639 kernel: rcu: Max phase no-delay instances is 400. Oct 31 20:55:24.232646 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 31 20:55:24.232653 kernel: Remapping and enabling EFI services. Oct 31 20:55:24.232660 kernel: smp: Bringing up secondary CPUs ... 
Oct 31 20:55:24.232669 kernel: Detected PIPT I-cache on CPU1 Oct 31 20:55:24.232680 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 31 20:55:24.232689 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 31 20:55:24.232697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 20:55:24.232704 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 31 20:55:24.232711 kernel: Detected PIPT I-cache on CPU2 Oct 31 20:55:24.232719 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 31 20:55:24.232728 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 31 20:55:24.232735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 20:55:24.232743 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 31 20:55:24.232750 kernel: Detected PIPT I-cache on CPU3 Oct 31 20:55:24.232758 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 31 20:55:24.232765 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 31 20:55:24.232773 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 20:55:24.232781 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 31 20:55:24.232789 kernel: smp: Brought up 1 node, 4 CPUs Oct 31 20:55:24.232796 kernel: SMP: Total of 4 processors activated. Oct 31 20:55:24.232804 kernel: CPU: All CPU(s) started at EL1 Oct 31 20:55:24.232811 kernel: CPU features: detected: 32-bit EL0 Support Oct 31 20:55:24.232819 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 31 20:55:24.232827 kernel: CPU features: detected: Common not Private translations Oct 31 20:55:24.232836 kernel: CPU features: detected: CRC32 instructions Oct 31 20:55:24.232843 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 31 20:55:24.232850 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 31 20:55:24.232858 kernel: CPU features: detected: LSE atomic instructions Oct 31 20:55:24.232866 kernel: CPU features: detected: Privileged Access Never Oct 31 20:55:24.232873 kernel: CPU features: detected: RAS Extension Support Oct 31 20:55:24.232881 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 31 20:55:24.232888 kernel: alternatives: applying system-wide alternatives Oct 31 20:55:24.232897 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 31 20:55:24.232904 kernel: Memory: 2451104K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12288K init, 1038K bss, 98848K reserved, 16384K cma-reserved) Oct 31 20:55:24.232912 kernel: devtmpfs: initialized Oct 31 20:55:24.232920 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 20:55:24.232927 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 31 20:55:24.232935 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 31 20:55:24.232942 kernel: 0 pages in range for non-PLT usage Oct 31 20:55:24.232951 kernel: 515232 pages in range for PLT usage Oct 31 20:55:24.232958 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 20:55:24.232965 kernel: SMBIOS 3.0.0 present. 
Oct 31 20:55:24.232973 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 31 20:55:24.232980 kernel: DMI: Memory slots populated: 1/1 Oct 31 20:55:24.232988 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 20:55:24.232995 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 31 20:55:24.233004 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 31 20:55:24.233012 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 31 20:55:24.233020 kernel: audit: initializing netlink subsys (disabled) Oct 31 20:55:24.233028 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Oct 31 20:55:24.233035 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 20:55:24.233042 kernel: cpuidle: using governor menu Oct 31 20:55:24.233050 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 31 20:55:24.233057 kernel: ASID allocator initialised with 32768 entries Oct 31 20:55:24.233066 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 20:55:24.233074 kernel: Serial: AMBA PL011 UART driver Oct 31 20:55:24.233081 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 20:55:24.233098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 20:55:24.233106 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 31 20:55:24.233114 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 31 20:55:24.233121 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 20:55:24.233130 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 20:55:24.233143 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 31 20:55:24.233152 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 31 20:55:24.233159 kernel: ACPI: Added _OSI(Module Device) Oct 31 20:55:24.233167 kernel: ACPI: Added _OSI(Processor Device) Oct 31 20:55:24.233174 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 20:55:24.233182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 20:55:24.233190 kernel: ACPI: Interpreter enabled Oct 31 20:55:24.233198 kernel: ACPI: Using GIC for interrupt routing Oct 31 20:55:24.233205 kernel: ACPI: MCFG table detected, 1 entries Oct 31 20:55:24.233213 kernel: ACPI: CPU0 has been hot-added Oct 31 20:55:24.233220 kernel: ACPI: CPU1 has been hot-added Oct 31 20:55:24.233227 kernel: ACPI: CPU2 has been hot-added Oct 31 20:55:24.233235 kernel: ACPI: CPU3 has been hot-added Oct 31 20:55:24.233243 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 31 20:55:24.233251 kernel: printk: legacy console [ttyAMA0] enabled Oct 31 20:55:24.233259 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 20:55:24.233428 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 20:55:24.233515 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 31 20:55:24.233594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 31 20:55:24.233672 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 31 20:55:24.233770 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 31 20:55:24.233780 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 31 20:55:24.233788 
kernel: PCI host bridge to bus 0000:00 Oct 31 20:55:24.233871 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 31 20:55:24.233945 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 31 20:55:24.234016 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 31 20:55:24.234104 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 20:55:24.234210 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 31 20:55:24.234301 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 31 20:55:24.234388 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 31 20:55:24.234467 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 31 20:55:24.234549 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 31 20:55:24.234627 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 31 20:55:24.234705 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 31 20:55:24.234784 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 31 20:55:24.234856 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 31 20:55:24.234926 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 31 20:55:24.234998 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 31 20:55:24.235007 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 31 20:55:24.235015 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 31 20:55:24.235023 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 31 20:55:24.235030 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 31 20:55:24.235038 kernel: iommu: Default domain type: Translated Oct 31 20:55:24.235047 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 31 20:55:24.235055 kernel: efivars: Registered efivars operations Oct 31 20:55:24.235062 kernel: vgaarb: loaded Oct 31 20:55:24.235069 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 31 20:55:24.235077 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 20:55:24.235084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 20:55:24.235103 kernel: pnp: PnP ACPI init Oct 31 20:55:24.235207 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 31 20:55:24.235222 kernel: pnp: PnP ACPI: found 1 devices Oct 31 20:55:24.235229 kernel: NET: Registered PF_INET protocol family Oct 31 20:55:24.235237 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 31 20:55:24.235245 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 31 20:55:24.235253 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 20:55:24.235260 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 20:55:24.235269 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 31 20:55:24.235277 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 31 20:55:24.235289 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 20:55:24.235297 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 20:55:24.235309 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 20:55:24.235318 kernel: PCI: CLS 0 bytes, default 64 Oct 31 20:55:24.235328 
kernel: kvm [1]: HYP mode not available Oct 31 20:55:24.235339 kernel: Initialise system trusted keyrings Oct 31 20:55:24.235347 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 31 20:55:24.235355 kernel: Key type asymmetric registered Oct 31 20:55:24.235362 kernel: Asymmetric key parser 'x509' registered Oct 31 20:55:24.235370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 31 20:55:24.235377 kernel: io scheduler mq-deadline registered Oct 31 20:55:24.235385 kernel: io scheduler kyber registered Oct 31 20:55:24.235394 kernel: io scheduler bfq registered Oct 31 20:55:24.235401 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 31 20:55:24.235409 kernel: ACPI: button: Power Button [PWRB] Oct 31 20:55:24.235417 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 31 20:55:24.235504 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 31 20:55:24.235514 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 20:55:24.235522 kernel: thunder_xcv, ver 1.0 Oct 31 20:55:24.235530 kernel: thunder_bgx, ver 1.0 Oct 31 20:55:24.235538 kernel: nicpf, ver 1.0 Oct 31 20:55:24.235546 kernel: nicvf, ver 1.0 Oct 31 20:55:24.235637 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 31 20:55:24.235714 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-31T20:55:23 UTC (1761944123) Oct 31 20:55:24.235724 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 31 20:55:24.235732 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 31 20:55:24.235741 kernel: watchdog: NMI not fully supported Oct 31 20:55:24.235749 kernel: watchdog: Hard watchdog permanently disabled Oct 31 20:55:24.235756 kernel: NET: Registered PF_INET6 protocol family Oct 31 20:55:24.235764 kernel: Segment Routing with IPv6 Oct 31 20:55:24.235771 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 20:55:24.235779 kernel: NET: Registered PF_PACKET protocol family Oct 31 20:55:24.235786 kernel: Key type dns_resolver registered Oct 31 20:55:24.235795 kernel: registered taskstats version 1 Oct 31 20:55:24.235802 kernel: Loading compiled-in X.509 certificates Oct 31 20:55:24.235810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 65a7304f30467433b5c106c4f21e1dd357e3006f' Oct 31 20:55:24.235818 kernel: Demotion targets for Node 0: null Oct 31 20:55:24.235825 kernel: Key type .fscrypt registered Oct 31 20:55:24.235833 kernel: Key type fscrypt-provisioning registered Oct 31 20:55:24.235840 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 20:55:24.235849 kernel: ima: Allocated hash algorithm: sha1 Oct 31 20:55:24.235857 kernel: ima: No architecture policies found Oct 31 20:55:24.235864 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 31 20:55:24.235872 kernel: clk: Disabling unused clocks Oct 31 20:55:24.235879 kernel: PM: genpd: Disabling unused power domains Oct 31 20:55:24.235887 kernel: Freeing unused kernel memory: 12288K Oct 31 20:55:24.235894 kernel: Run /init as init process Oct 31 20:55:24.235902 kernel: with arguments: Oct 31 20:55:24.235911 kernel: /init Oct 31 20:55:24.235919 kernel: with environment: Oct 31 20:55:24.235927 kernel: HOME=/ Oct 31 20:55:24.235934 kernel: TERM=linux Oct 31 20:55:24.236034 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 31 20:55:24.236125 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 31 20:55:24.236145 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 20:55:24.236154 kernel: GPT:16515071 != 27000831 Oct 31 20:55:24.236162 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 20:55:24.236169 kernel: GPT:16515071 != 27000831 Oct 31 20:55:24.236177 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 20:55:24.236184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 20:55:24.236191 kernel: SCSI subsystem initialized Oct 31 20:55:24.236202 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 20:55:24.236209 kernel: device-mapper: uevent: version 1.0.3 Oct 31 20:55:24.236217 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 31 20:55:24.236224 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 31 20:55:24.236232 kernel: raid6: neonx8 gen() 15742 MB/s Oct 31 20:55:24.236239 kernel: raid6: neonx4 gen() 15711 MB/s Oct 31 20:55:24.236247 kernel: raid6: neonx2 gen() 13283 MB/s Oct 31 20:55:24.236256 kernel: raid6: neonx1 gen() 10435 MB/s Oct 31 20:55:24.236263 kernel: raid6: int64x8 gen() 6821 MB/s Oct 31 20:55:24.236271 kernel: raid6: int64x4 gen() 7338 MB/s Oct 31 20:55:24.236278 kernel: raid6: int64x2 gen() 6105 MB/s Oct 31 20:55:24.236286 kernel: raid6: int64x1 gen() 5059 MB/s Oct 31 20:55:24.236293 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s Oct 31 20:55:24.236300 kernel: raid6: .... 
xor() 12043 MB/s, rmw enabled Oct 31 20:55:24.236309 kernel: raid6: using neon recovery algorithm Oct 31 20:55:24.236316 kernel: xor: measuring software checksum speed Oct 31 20:55:24.236323 kernel: 8regs : 21579 MB/sec Oct 31 20:55:24.236331 kernel: 32regs : 21653 MB/sec Oct 31 20:55:24.236338 kernel: arm64_neon : 22676 MB/sec Oct 31 20:55:24.236346 kernel: xor: using function: arm64_neon (22676 MB/sec) Oct 31 20:55:24.236353 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 20:55:24.236362 kernel: BTRFS: device fsid 445b3c55-316e-452c-a2c5-083d92675878 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (205) Oct 31 20:55:24.236370 kernel: BTRFS info (device dm-0): first mount of filesystem 445b3c55-316e-452c-a2c5-083d92675878 Oct 31 20:55:24.236378 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 31 20:55:24.236386 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 20:55:24.236393 kernel: BTRFS info (device dm-0): enabling free space tree Oct 31 20:55:24.236400 kernel: loop: module loaded Oct 31 20:55:24.236408 kernel: loop0: detected capacity change from 0 to 91464 Oct 31 20:55:24.236417 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 20:55:24.236425 systemd[1]: Successfully made /usr/ read-only. Oct 31 20:55:24.236436 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 20:55:24.236445 systemd[1]: Detected virtualization kvm. Oct 31 20:55:24.236453 systemd[1]: Detected architecture arm64. Oct 31 20:55:24.236460 systemd[1]: Running in initrd. Oct 31 20:55:24.236469 systemd[1]: No hostname configured, using default hostname. Oct 31 20:55:24.236478 systemd[1]: Hostname set to . Oct 31 20:55:24.236486 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 31 20:55:24.236494 systemd[1]: Queued start job for default target initrd.target. Oct 31 20:55:24.236502 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 20:55:24.236510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 20:55:24.236519 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 20:55:24.236528 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 20:55:24.236537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 20:55:24.236545 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 20:55:24.236554 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 20:55:24.236562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 20:55:24.236571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 20:55:24.236579 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 31 20:55:24.236588 systemd[1]: Reached target paths.target - Path Units. Oct 31 20:55:24.236595 systemd[1]: Reached target slices.target - Slice Units. 
Oct 31 20:55:24.236604 systemd[1]: Reached target swap.target - Swaps. Oct 31 20:55:24.236612 systemd[1]: Reached target timers.target - Timer Units. Oct 31 20:55:24.236620 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 20:55:24.236629 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 20:55:24.236637 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 20:55:24.236646 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 31 20:55:24.236660 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 20:55:24.236670 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 20:55:24.236679 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 20:55:24.236688 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 20:55:24.236697 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 20:55:24.236705 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 20:55:24.236714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 20:55:24.236722 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 20:55:24.236731 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 31 20:55:24.236740 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 20:55:24.236749 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 20:55:24.236757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 20:55:24.236765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 20:55:24.236775 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 20:55:24.236784 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 20:55:24.236792 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 20:55:24.236801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 20:55:24.236827 systemd-journald[346]: Collecting audit messages is disabled. Oct 31 20:55:24.236849 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 20:55:24.236857 kernel: Bridge firewalling registered Oct 31 20:55:24.236866 systemd-journald[346]: Journal started Oct 31 20:55:24.236885 systemd-journald[346]: Runtime Journal (/run/log/journal/623280d9578f4f49abe3fdccd251c454) is 6M, max 48.5M, 42.4M free. Oct 31 20:55:24.235958 systemd-modules-load[349]: Inserted module 'br_netfilter' Oct 31 20:55:24.247620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 20:55:24.251419 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 20:55:24.252043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 20:55:24.254868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 20:55:24.258959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 31 20:55:24.260638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 20:55:24.262609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 20:55:24.270685 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 20:55:24.280028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 20:55:24.283189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 20:55:24.283622 systemd-tmpfiles[370]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 31 20:55:24.286268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 20:55:24.289747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 20:55:24.297247 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 20:55:24.299679 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 20:55:24.314677 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efdc74f99e3f51ed6e04024d138ff4e894c899f263395f02b1e500da138da28d Oct 31 20:55:24.335903 systemd-resolved[381]: Positive Trust Anchors: Oct 31 20:55:24.335924 systemd-resolved[381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 20:55:24.335927 systemd-resolved[381]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 20:55:24.335957 systemd-resolved[381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 20:55:24.356945 systemd-resolved[381]: Defaulting to hostname 'linux'. Oct 31 20:55:24.357832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 20:55:24.359043 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 20:55:24.392113 kernel: Loading iSCSI transport class v2.0-870. Oct 31 20:55:24.401143 kernel: iscsi: registered transport (tcp) Oct 31 20:55:24.414145 kernel: iscsi: registered transport (qla4xxx) Oct 31 20:55:24.414183 kernel: QLogic iSCSI HBA Driver Oct 31 20:55:24.434251 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 20:55:24.458166 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 20:55:24.460873 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 20:55:24.504121 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 20:55:24.506349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Oct 31 20:55:24.507841 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 20:55:24.542161 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 20:55:24.544952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 20:55:24.575263 systemd-udevd[627]: Using default interface naming scheme 'v257'. Oct 31 20:55:24.582987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 20:55:24.586661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 20:55:24.609804 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 20:55:24.612332 dracut-pre-trigger[702]: rd.md=0: removing MD RAID activation Oct 31 20:55:24.612815 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 20:55:24.633375 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 20:55:24.635707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 20:55:24.658322 systemd-networkd[742]: lo: Link UP Oct 31 20:55:24.658330 systemd-networkd[742]: lo: Gained carrier Oct 31 20:55:24.659231 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 20:55:24.661072 systemd[1]: Reached target network.target - Network. Oct 31 20:55:24.693717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 20:55:24.698184 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 20:55:24.736151 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 20:55:24.749423 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 20:55:24.762010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 20:55:24.772600 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 20:55:24.774682 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 20:55:24.787336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 20:55:24.787451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 20:55:24.791189 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 20:55:24.793977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 20:55:24.796704 disk-uuid[802]: Primary Header is updated. Oct 31 20:55:24.796704 disk-uuid[802]: Secondary Entries is updated. Oct 31 20:55:24.796704 disk-uuid[802]: Secondary Header is updated. Oct 31 20:55:24.798977 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 20:55:24.798990 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 31 20:55:24.800184 systemd-networkd[742]: eth0: Link UP Oct 31 20:55:24.800566 systemd-networkd[742]: eth0: Gained carrier Oct 31 20:55:24.800576 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 20:55:24.816194 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 20:55:24.830262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 20:55:24.856365 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 20:55:24.858035 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 20:55:24.860430 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 20:55:24.862461 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 20:55:24.865385 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 20:55:24.898629 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 20:55:25.829264 disk-uuid[805]: Warning: The kernel is still using the old partition table. Oct 31 20:55:25.829264 disk-uuid[805]: The new table will be used at the next reboot or after you Oct 31 20:55:25.829264 disk-uuid[805]: run partprobe(8) or kpartx(8) Oct 31 20:55:25.829264 disk-uuid[805]: The operation has completed successfully. Oct 31 20:55:25.835184 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 20:55:25.835307 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 20:55:25.838191 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 20:55:25.879252 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834) Oct 31 20:55:25.879305 kernel: BTRFS info (device vda6): first mount of filesystem 7194a0b1-0d4f-4d80-a661-ac44432e436a Oct 31 20:55:25.879316 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 20:55:25.883122 kernel: BTRFS info (device vda6): turning on async discard Oct 31 20:55:25.883172 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 20:55:25.889110 kernel: BTRFS info (device vda6): last unmount of filesystem 7194a0b1-0d4f-4d80-a661-ac44432e436a Oct 31 20:55:25.890201 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 20:55:25.891995 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
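The `GPT:16515071 != 27000831` warnings earlier in the log, together with the disk-uuid[805] rewrite just above, are the usual sign of an image whose backup GPT header was laid out for a smaller disk than the one it now occupies (the backup is recorded at LBA 16515071 while the disk's real last sector is 27000831). A minimal sketch of checking that mismatch by hand, assuming a raw block device at the hypothetical path /dev/vda and the standard GPT header layout (alternate-header LBA at byte offset 32 of LBA 1); reading the device needs root:

```python
# Sketch: read the primary GPT header and compare its "alternate LBA" field
# with the real last sector, the same mismatch the kernel reported as
# "GPT:16515071 != 27000831". The device path is an assumption.
import struct

DEV = "/dev/vda"      # hypothetical device; adjust as needed
SECTOR = 512

with open(DEV, "rb") as f:
    f.seek(0, 2)
    last_lba = f.tell() // SECTOR - 1   # actual last sector index of the disk
    f.seek(1 * SECTOR)                  # primary GPT header lives at LBA 1
    hdr = f.read(92)

sig = hdr[0:8]                                   # should be b"EFI PART"
alt_lba = struct.unpack_from("<Q", hdr, 32)[0]   # where the header thinks the backup is

print("signature:", sig)
print("alternate header LBA:", alt_lba)
print("actual last LBA:     ", last_lba)
if alt_lba != last_lba:
    print("backup GPT header is not at the end of the disk (disk likely grown)")
```

In the log itself the table is rewritten by disk-uuid.service, and the kernel keeps using the old table until partprobe(8) or kpartx(8) rereads it, exactly as the disk-uuid output notes.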
Oct 31 20:55:25.982564 ignition[853]: Ignition 2.22.0 Oct 31 20:55:25.982582 ignition[853]: Stage: fetch-offline Oct 31 20:55:25.982621 ignition[853]: no configs at "/usr/lib/ignition/base.d" Oct 31 20:55:25.982630 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 20:55:25.982777 ignition[853]: parsed url from cmdline: "" Oct 31 20:55:25.982780 ignition[853]: no config URL provided Oct 31 20:55:25.982784 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 20:55:25.982792 ignition[853]: no config at "/usr/lib/ignition/user.ign" Oct 31 20:55:25.982826 ignition[853]: op(1): [started] loading QEMU firmware config module Oct 31 20:55:25.982830 ignition[853]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 20:55:25.988742 ignition[853]: op(1): [finished] loading QEMU firmware config module Oct 31 20:55:26.031629 ignition[853]: parsing config with SHA512: 0cd6a3a512e255e1ad550c44d2ebdaf619e61751d6dea696e8254b934139b052d2cae3abec6a4c3ebc993b334573cd878ae06753595e8af6721a6ff7fab4b271 Oct 31 20:55:26.035454 unknown[853]: fetched base config from "system" Oct 31 20:55:26.035467 unknown[853]: fetched user config from "qemu" Oct 31 20:55:26.035820 ignition[853]: fetch-offline: fetch-offline passed Oct 31 20:55:26.035880 ignition[853]: Ignition finished successfully Oct 31 20:55:26.039037 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 20:55:26.040470 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 20:55:26.041253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 20:55:26.074156 ignition[869]: Ignition 2.22.0 Oct 31 20:55:26.074170 ignition[869]: Stage: kargs Oct 31 20:55:26.074310 ignition[869]: no configs at "/usr/lib/ignition/base.d" Oct 31 20:55:26.074318 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 20:55:26.075112 ignition[869]: kargs: kargs passed Oct 31 20:55:26.075166 ignition[869]: Ignition finished successfully Oct 31 20:55:26.079182 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 20:55:26.081169 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 20:55:26.119848 ignition[877]: Ignition 2.22.0 Oct 31 20:55:26.119868 ignition[877]: Stage: disks Oct 31 20:55:26.120008 ignition[877]: no configs at "/usr/lib/ignition/base.d" Oct 31 20:55:26.123154 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 20:55:26.120016 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 20:55:26.124309 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 20:55:26.120960 ignition[877]: disks: disks passed Oct 31 20:55:26.126133 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 20:55:26.121006 ignition[877]: Ignition finished successfully Oct 31 20:55:26.128346 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 20:55:26.130239 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 20:55:26.131763 systemd[1]: Reached target basic.target - Basic System. Oct 31 20:55:26.134778 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
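The fetch-offline stage above found no /usr/lib/ignition/user.ign, loaded qemu_fw_cfg, and pulled the user config from the hypervisor, logging a SHA512 of whatever it parsed. For orientation, a small sketch of the general shape of an Ignition spec 3.x user config and of computing such a digest in Python; the spec version, user details and SSH key below are placeholders, and exactly which bytes Ignition hashes is an assumption, so the digest will not match the one in the log:

```python
# Sketch: rough shape of an Ignition user config (spec 3.x) and the SHA512
# digest Ignition logs for it. Version, user name and SSH key are
# illustrative placeholders, not what this VM was booted with.
import hashlib
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]}
        ]
    },
}

raw = json.dumps(config).encode()
print(raw.decode())
# Assuming the digest is taken over the raw config bytes, this mirrors the
# "parsing config with SHA512: ..." line in the log above.
print("SHA512:", hashlib.sha512(raw).hexdigest())
```

The later files stage ("adding ssh keys to user core", writing files and systemd units) is driven by exactly this kind of document, merged with the base config fetched from "system".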
Oct 31 20:55:26.175882 systemd-fsck[887]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 31 20:55:26.302133 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 20:55:26.304515 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 20:55:26.375996 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 20:55:26.377648 kernel: EXT4-fs (vda9): mounted filesystem 7c037fb7-1ec4-4e36-ab60-34d241ee33bc r/w with ordered data mode. Quota mode: none. Oct 31 20:55:26.377370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 20:55:26.380697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 20:55:26.383025 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 20:55:26.384154 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 20:55:26.384187 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 20:55:26.384212 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 20:55:26.396757 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 20:55:26.399056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 20:55:26.403946 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (895) Oct 31 20:55:26.403989 kernel: BTRFS info (device vda6): first mount of filesystem 7194a0b1-0d4f-4d80-a661-ac44432e436a Oct 31 20:55:26.405083 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 20:55:26.408151 kernel: BTRFS info (device vda6): turning on async discard Oct 31 20:55:26.408200 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 20:55:26.409213 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 20:55:26.439780 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 20:55:26.444030 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Oct 31 20:55:26.448011 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 20:55:26.451507 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 20:55:26.518972 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 20:55:26.521440 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 20:55:26.523115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 20:55:26.541471 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 20:55:26.543153 kernel: BTRFS info (device vda6): last unmount of filesystem 7194a0b1-0d4f-4d80-a661-ac44432e436a Oct 31 20:55:26.544233 systemd-networkd[742]: eth0: Gained IPv6LL Oct 31 20:55:26.561248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 20:55:26.576192 ignition[1009]: INFO : Ignition 2.22.0 Oct 31 20:55:26.576192 ignition[1009]: INFO : Stage: mount Oct 31 20:55:26.577688 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 20:55:26.577688 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 20:55:26.577688 ignition[1009]: INFO : mount: mount passed Oct 31 20:55:26.577688 ignition[1009]: INFO : Ignition finished successfully Oct 31 20:55:26.579443 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Oct 31 20:55:26.581561 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 20:55:27.377544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 20:55:27.407127 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1021) Oct 31 20:55:27.409849 kernel: BTRFS info (device vda6): first mount of filesystem 7194a0b1-0d4f-4d80-a661-ac44432e436a Oct 31 20:55:27.409866 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 20:55:27.412842 kernel: BTRFS info (device vda6): turning on async discard Oct 31 20:55:27.412860 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 20:55:27.414255 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 20:55:27.449478 ignition[1039]: INFO : Ignition 2.22.0 Oct 31 20:55:27.449478 ignition[1039]: INFO : Stage: files Oct 31 20:55:27.451281 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 20:55:27.451281 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 20:55:27.451281 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping Oct 31 20:55:27.454685 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 20:55:27.454685 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 20:55:27.454685 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 20:55:27.454685 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 20:55:27.454685 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 20:55:27.454444 unknown[1039]: wrote ssh authorized keys file for user: core Oct 31 20:55:27.463581 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 31 20:55:27.463581 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 31 20:55:27.530377 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 20:55:27.763072 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 20:55:27.765211 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 20:55:27.765211 
ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 20:55:27.780297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 20:55:27.780297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 20:55:27.780297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 20:55:27.780297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 20:55:27.780297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 20:55:27.780297 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 31 20:55:28.199483 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 20:55:28.887165 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 20:55:28.887165 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 31 20:55:28.891303 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 20:55:28.906230 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 20:55:28.908150 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 20:55:28.909822 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 20:55:28.909822 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 31 20:55:28.909822 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 20:55:28.909822 ignition[1039]: INFO : files: createResultFile: createFiles: 
op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 20:55:28.909822 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 20:55:28.909822 ignition[1039]: INFO : files: files passed Oct 31 20:55:28.909822 ignition[1039]: INFO : Ignition finished successfully Oct 31 20:55:28.910269 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 20:55:28.914240 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 20:55:28.918229 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 20:55:28.930290 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 20:55:28.930387 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 20:55:28.934749 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory Oct 31 20:55:28.938043 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 20:55:28.938043 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 20:55:28.941778 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 20:55:28.941660 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 20:55:28.945016 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 20:55:28.947904 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 20:55:28.977748 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 20:55:28.977886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 20:55:28.980313 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 31 20:55:28.982376 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 20:55:28.984459 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 20:55:28.985315 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 20:55:29.017139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 20:55:29.019668 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 20:55:29.052110 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 20:55:29.052310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 20:55:29.054520 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 20:55:29.056756 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 20:55:29.058641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 20:55:29.058770 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 20:55:29.061361 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 20:55:29.063448 systemd[1]: Stopped target basic.target - Basic System. Oct 31 20:55:29.065141 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 20:55:29.066977 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Oct 31 20:55:29.069110 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 20:55:29.071259 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 31 20:55:29.073327 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 20:55:29.075269 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 20:55:29.077342 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 20:55:29.079349 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 20:55:29.081273 systemd[1]: Stopped target swap.target - Swaps. Oct 31 20:55:29.082926 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 20:55:29.083057 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 20:55:29.085567 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 20:55:29.087600 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 20:55:29.089720 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 20:55:29.093195 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 20:55:29.094487 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 20:55:29.094611 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 20:55:29.097732 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 20:55:29.097854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 20:55:29.100019 systemd[1]: Stopped target paths.target - Path Units. Oct 31 20:55:29.101732 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 20:55:29.101840 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 20:55:29.104004 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 20:55:29.105770 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 20:55:29.107514 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 20:55:29.107600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 20:55:29.109835 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 20:55:29.109917 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 20:55:29.111604 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 20:55:29.111717 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 20:55:29.113589 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 20:55:29.113697 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 20:55:29.116071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 20:55:29.118632 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 20:55:29.119627 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 20:55:29.119763 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 20:55:29.121978 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 31 20:55:29.122079 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 20:55:29.124291 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 20:55:29.124390 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 31 20:55:29.129918 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 20:55:29.137150 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 20:55:29.146458 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 20:55:29.151630 ignition[1097]: INFO : Ignition 2.22.0 Oct 31 20:55:29.151630 ignition[1097]: INFO : Stage: umount Oct 31 20:55:29.153473 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 20:55:29.153473 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 20:55:29.153473 ignition[1097]: INFO : umount: umount passed Oct 31 20:55:29.153473 ignition[1097]: INFO : Ignition finished successfully Oct 31 20:55:29.155124 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 20:55:29.155224 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 20:55:29.156835 systemd[1]: Stopped target network.target - Network. Oct 31 20:55:29.158289 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 20:55:29.158349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 20:55:29.160293 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 20:55:29.160345 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 20:55:29.162177 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 20:55:29.162228 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 20:55:29.164295 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 20:55:29.164342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 20:55:29.166234 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 20:55:29.168214 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 20:55:29.174043 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 20:55:29.174188 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 20:55:29.184655 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 20:55:29.184789 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 20:55:29.190451 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 20:55:29.190548 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 20:55:29.192771 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 31 20:55:29.194147 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 20:55:29.194186 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 20:55:29.196315 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 20:55:29.196371 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 20:55:29.198924 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 20:55:29.200048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 20:55:29.200138 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 20:55:29.202304 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 20:55:29.202348 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 20:55:29.204181 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 20:55:29.204225 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 31 20:55:29.206252 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 20:55:29.219346 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 20:55:29.219474 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 20:55:29.221817 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 20:55:29.221856 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 20:55:29.223677 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 20:55:29.223708 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 20:55:29.225591 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 20:55:29.225639 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 20:55:29.228507 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 20:55:29.228562 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 20:55:29.231417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 20:55:29.231468 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 20:55:29.235197 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 20:55:29.236366 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 31 20:55:29.236432 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 20:55:29.238450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 20:55:29.238501 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 20:55:29.240794 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 31 20:55:29.240843 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 20:55:29.243057 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 20:55:29.243123 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 20:55:29.245252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 20:55:29.245302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 20:55:29.248158 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 20:55:29.254253 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 20:55:29.259635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 20:55:29.259728 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 20:55:29.262118 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 20:55:29.264774 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 20:55:29.290228 systemd[1]: Switching root. Oct 31 20:55:29.334345 systemd-journald[346]: Journal stopped Oct 31 20:55:30.127927 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). 
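[Editor's note: at this point the initrd hands over to the real root and journald is restarted, so the Ignition messages above end up in the current boot's journal. A small sketch for pulling them back out follows; it assumes journalctl is on PATH and that the Ignition stages log with the syslog identifier "ignition", as the transcript shows.]

```python
#!/usr/bin/env python3
"""Minimal sketch: extract the Ignition messages for the current boot from the
journal after the switch-root shown above.

Assumptions: journalctl is on PATH and Ignition logs under the syslog
identifier "ignition"; adjust the identifier if your image tags it differently.
"""
import subprocess

def ignition_log() -> str:
    # -b: current boot only, -t: match the syslog identifier,
    # -o short-precise: keep the microsecond timestamps seen in the transcript.
    return subprocess.run(
        ["journalctl", "-b", "-t", "ignition", "-o", "short-precise", "--no-pager"],
        check=True, capture_output=True, text=True,
    ).stdout

if __name__ == "__main__":
    print(ignition_log())
```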
Oct 31 20:55:30.127980 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 20:55:30.127993 kernel: SELinux: policy capability open_perms=1 Oct 31 20:55:30.128006 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 20:55:30.128018 kernel: SELinux: policy capability always_check_network=0 Oct 31 20:55:30.128029 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 20:55:30.128039 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 20:55:30.128048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 20:55:30.128062 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 20:55:30.128072 kernel: SELinux: policy capability userspace_initial_context=0 Oct 31 20:55:30.128105 kernel: audit: type=1403 audit(1761944129.543:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 20:55:30.128128 systemd[1]: Successfully loaded SELinux policy in 56.100ms. Oct 31 20:55:30.128149 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.391ms. Oct 31 20:55:30.128161 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 20:55:30.128172 systemd[1]: Detected virtualization kvm. Oct 31 20:55:30.128184 systemd[1]: Detected architecture arm64. Oct 31 20:55:30.128194 systemd[1]: Detected first boot. Oct 31 20:55:30.128207 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 31 20:55:30.128218 zram_generator::config[1143]: No configuration found. Oct 31 20:55:30.128229 kernel: NET: Registered PF_VSOCK protocol family Oct 31 20:55:30.128239 systemd[1]: Populated /etc with preset unit settings. Oct 31 20:55:30.128252 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 20:55:30.128264 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 31 20:55:30.128276 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 20:55:30.128288 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 20:55:30.128298 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 20:55:30.128309 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 20:55:30.128320 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 20:55:30.128330 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 20:55:30.128341 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 31 20:55:30.128354 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 20:55:30.128365 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 20:55:30.128376 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 20:55:30.128387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 20:55:30.128397 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 20:55:30.128408 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 31 20:55:30.128419 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 20:55:30.128431 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 20:55:30.128442 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 31 20:55:30.128453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 20:55:30.128464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 20:55:30.128478 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 31 20:55:30.128488 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 20:55:30.128500 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 20:55:30.128511 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 20:55:30.128522 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 20:55:30.128533 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 20:55:30.128544 systemd[1]: Reached target slices.target - Slice Units. Oct 31 20:55:30.128554 systemd[1]: Reached target swap.target - Swaps. Oct 31 20:55:30.128565 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 20:55:30.128575 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 20:55:30.128588 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 31 20:55:30.128599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 20:55:30.128610 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 20:55:30.128621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 20:55:30.128636 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 20:55:30.128649 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 20:55:30.128663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 20:55:30.128675 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 20:55:30.128686 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 20:55:30.128696 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 20:55:30.128707 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 31 20:55:30.128719 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 20:55:30.128730 systemd[1]: Reached target machines.target - Containers. Oct 31 20:55:30.128741 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 20:55:30.128753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 20:55:30.128764 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 20:55:30.128774 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 20:55:30.128785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 20:55:30.128796 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 31 20:55:30.128807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 20:55:30.128819 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 20:55:30.128830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 20:55:30.128841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 20:55:30.128853 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 31 20:55:30.128864 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 31 20:55:30.128876 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 31 20:55:30.128886 systemd[1]: Stopped systemd-fsck-usr.service. Oct 31 20:55:30.128898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 20:55:30.128909 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 20:55:30.128920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 20:55:30.128931 kernel: fuse: init (API version 7.41) Oct 31 20:55:30.128941 kernel: ACPI: bus type drm_connector registered Oct 31 20:55:30.128952 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 20:55:30.128962 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 20:55:30.128975 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 31 20:55:30.128985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 20:55:30.128996 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 20:55:30.129007 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 20:55:30.129019 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 20:55:30.129030 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 20:55:30.129040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 20:55:30.129051 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 20:55:30.129081 systemd-journald[1218]: Collecting audit messages is disabled. Oct 31 20:55:30.129133 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 20:55:30.129149 systemd-journald[1218]: Journal started Oct 31 20:55:30.129170 systemd-journald[1218]: Runtime Journal (/run/log/journal/623280d9578f4f49abe3fdccd251c454) is 6M, max 48.5M, 42.4M free. Oct 31 20:55:29.897487 systemd[1]: Queued start job for default target multi-user.target. Oct 31 20:55:29.916107 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 31 20:55:29.916547 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 31 20:55:30.131344 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 20:55:30.132482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 20:55:30.134233 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 20:55:30.135156 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 20:55:30.136686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 31 20:55:30.136869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 20:55:30.138416 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 20:55:30.138586 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 20:55:30.140136 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 20:55:30.142129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 20:55:30.143814 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 20:55:30.143974 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 20:55:30.145392 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 20:55:30.145542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 20:55:30.148144 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 20:55:30.149666 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 20:55:30.152005 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 31 20:55:30.153635 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 31 20:55:30.161192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 20:55:30.169691 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 20:55:30.171273 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 31 20:55:30.173992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 20:55:30.176478 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 20:55:30.177835 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 20:55:30.177878 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 20:55:30.179934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 31 20:55:30.181796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 20:55:30.182976 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 20:55:30.185202 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 20:55:30.186501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 20:55:30.187497 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 20:55:30.188765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 20:55:30.192252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 20:55:30.193522 systemd-journald[1218]: Time spent on flushing to /var/log/journal/623280d9578f4f49abe3fdccd251c454 is 18.108ms for 871 entries. Oct 31 20:55:30.193522 systemd-journald[1218]: System Journal (/var/log/journal/623280d9578f4f49abe3fdccd251c454) is 8M, max 163.5M, 155.5M free. Oct 31 20:55:30.221202 systemd-journald[1218]: Received client request to flush runtime journal. 
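[Editor's note: the journald lines above show the runtime journal (6M) being flushed into the persistent system journal under /var/log/journal. A small sketch for checking how much space the journal occupies afterwards follows; it assumes journalctl is available, and the machine-id directory will of course differ per host.]

```python
#!/usr/bin/env python3
"""Minimal sketch: report journal disk usage after the flush shown above.

Assumptions: journalctl is on PATH; the machine-id directory under
/var/log/journal differs per host.
"""
import subprocess
from pathlib import Path

def journal_usage() -> str:
    # --disk-usage sums active and archived journal files.
    return subprocess.run(
        ["journalctl", "--disk-usage"], check=True, capture_output=True, text=True
    ).stdout.strip()

if __name__ == "__main__":
    print(journal_usage())
    persistent = Path("/var/log/journal")
    if persistent.is_dir():
        for d in sorted(persistent.iterdir()):
            print("persistent journal directory:", d)
```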
Oct 31 20:55:30.221244 kernel: loop1: detected capacity change from 0 to 207008 Oct 31 20:55:30.195073 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 31 20:55:30.198386 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 20:55:30.201354 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 20:55:30.203066 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 20:55:30.209302 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 20:55:30.210853 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 20:55:30.215325 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 31 20:55:30.228328 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 20:55:30.230478 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Oct 31 20:55:30.230497 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Oct 31 20:55:30.232264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 20:55:30.234246 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 20:55:30.240170 kernel: loop2: detected capacity change from 0 to 100192 Oct 31 20:55:30.238970 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 20:55:30.256305 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 31 20:55:30.264128 kernel: loop3: detected capacity change from 0 to 109736 Oct 31 20:55:30.277251 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 20:55:30.280080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 20:55:30.282085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 20:55:30.289115 kernel: loop4: detected capacity change from 0 to 207008 Oct 31 20:55:30.297939 kernel: loop5: detected capacity change from 0 to 100192 Oct 31 20:55:30.297262 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 20:55:30.301113 kernel: loop6: detected capacity change from 0 to 109736 Oct 31 20:55:30.304625 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 31 20:55:30.307396 (sd-merge)[1284]: Merged extensions into '/usr'. Oct 31 20:55:30.309552 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Oct 31 20:55:30.309854 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Oct 31 20:55:30.313888 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 20:55:30.316345 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 20:55:30.316367 systemd[1]: Reloading... Oct 31 20:55:30.374556 zram_generator::config[1319]: No configuration found. Oct 31 20:55:30.416604 systemd-resolved[1282]: Positive Trust Anchors: Oct 31 20:55:30.416625 systemd-resolved[1282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 20:55:30.416629 systemd-resolved[1282]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 20:55:30.416664 systemd-resolved[1282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 20:55:30.423832 systemd-resolved[1282]: Defaulting to hostname 'linux'. Oct 31 20:55:30.519601 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 20:55:30.519826 systemd[1]: Reloading finished in 203 ms. Oct 31 20:55:30.536682 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 31 20:55:30.538186 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 20:55:30.539644 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 20:55:30.542883 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 20:55:30.554330 systemd[1]: Starting ensure-sysext.service... Oct 31 20:55:30.556224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 20:55:30.565205 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)... Oct 31 20:55:30.565221 systemd[1]: Reloading... Oct 31 20:55:30.569059 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 31 20:55:30.569131 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 31 20:55:30.569328 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 20:55:30.569477 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 20:55:30.570029 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 20:55:30.570234 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Oct 31 20:55:30.570283 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Oct 31 20:55:30.573723 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 20:55:30.573738 systemd-tmpfiles[1354]: Skipping /boot Oct 31 20:55:30.579883 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 20:55:30.579902 systemd-tmpfiles[1354]: Skipping /boot Oct 31 20:55:30.615129 zram_generator::config[1384]: No configuration found. Oct 31 20:55:30.741470 systemd[1]: Reloading finished in 175 ms. Oct 31 20:55:30.760802 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 20:55:30.785684 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 20:55:30.794602 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 20:55:30.796746 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 31 20:55:30.823020 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
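[Editor's note: the sd-merge entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr. A small sketch for listing the merged state and the extension images follows; it assumes the systemd-sysext binary is on PATH and that the images live under /etc/extensions as in the transcript.]

```python
#!/usr/bin/env python3
"""Minimal sketch: show which system extensions are merged into /usr,
matching the sd-merge messages above.

Assumptions: systemd-sysext is on PATH; extension symlinks live under
/etc/extensions as in the transcript.
"""
import subprocess
from pathlib import Path

def merged_status() -> str:
    # "status" prints which hierarchies currently carry merged extensions.
    return subprocess.run(
        ["systemd-sysext", "status"], check=True, capture_output=True, text=True
    ).stdout

if __name__ == "__main__":
    print(merged_status())
    ext_dir = Path("/etc/extensions")
    if ext_dir.is_dir():
        for raw in sorted(ext_dir.glob("*.raw")):
            print("extension image:", raw, "->", raw.resolve())
```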
Oct 31 20:55:30.827330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 20:55:30.830558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 20:55:30.833312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 20:55:30.839482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 20:55:30.840545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 20:55:30.843438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 20:55:30.849597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 20:55:30.851134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 20:55:30.851252 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 20:55:30.855635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 20:55:30.857971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 20:55:30.858156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 20:55:30.861679 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 20:55:30.861815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 20:55:30.869558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 20:55:30.869757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 20:55:30.875418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 20:55:30.878832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 20:55:30.881448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 20:55:30.884583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 20:55:30.886301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 20:55:30.886429 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 20:55:30.886524 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 20:55:30.886951 augenrules[1455]: No rules Oct 31 20:55:30.887443 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 31 20:55:30.889370 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 20:55:30.890186 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 20:55:30.891869 systemd-udevd[1428]: Using default interface naming scheme 'v257'. Oct 31 20:55:30.899313 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 31 20:55:30.900727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 20:55:30.905312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 20:55:30.908305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 20:55:30.909657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 20:55:30.909778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 20:55:30.909886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 20:55:30.910876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 20:55:30.911286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 20:55:30.913660 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 20:55:30.914853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 20:55:30.916480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 20:55:30.928400 systemd[1]: Finished ensure-sysext.service. Oct 31 20:55:30.932384 augenrules[1463]: /sbin/augenrules: No change Oct 31 20:55:30.938563 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 20:55:30.940255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 20:55:30.942737 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 20:55:30.944398 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 20:55:30.946347 augenrules[1506]: No rules Oct 31 20:55:30.947544 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 20:55:30.949712 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 20:55:30.950142 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 20:55:30.953607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 20:55:30.953777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 20:55:30.958064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 20:55:31.024404 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 31 20:55:31.039533 systemd-networkd[1504]: lo: Link UP Oct 31 20:55:31.039541 systemd-networkd[1504]: lo: Gained carrier Oct 31 20:55:31.039688 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 20:55:31.040802 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 20:55:31.040814 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 20:55:31.041428 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 31 20:55:31.042052 systemd-networkd[1504]: eth0: Link UP Oct 31 20:55:31.042262 systemd-networkd[1504]: eth0: Gained carrier Oct 31 20:55:31.042281 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 20:55:31.045529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 20:55:31.047785 systemd[1]: Reached target network.target - Network. Oct 31 20:55:31.049125 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 20:55:31.051771 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 20:55:31.057238 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 20:55:31.057513 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 31 20:55:31.058467 systemd-timesyncd[1505]: Network configuration changed, trying to establish connection. Oct 31 20:55:31.059526 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 31 20:55:31.059589 systemd-timesyncd[1505]: Initial clock synchronization to Fri 2025-10-31 20:55:31.200840 UTC. Oct 31 20:55:31.060289 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 20:55:31.080577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 20:55:31.083119 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 31 20:55:31.147154 ldconfig[1422]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 20:55:31.151530 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 20:55:31.155254 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 20:55:31.171321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 20:55:31.174330 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 20:55:31.205818 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 20:55:31.209317 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 20:55:31.210513 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 20:55:31.211799 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 20:55:31.213267 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 20:55:31.214424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 20:55:31.215700 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 31 20:55:31.217019 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 20:55:31.217059 systemd[1]: Reached target paths.target - Path Units. Oct 31 20:55:31.218076 systemd[1]: Reached target timers.target - Timer Units. Oct 31 20:55:31.219792 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 20:55:31.222155 systemd[1]: Starting docker.socket - Docker Socket for the API... 
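[Editor's note: the entries above show systemd-networkd bringing up eth0 with a DHCPv4 lease of 10.0.0.70/16 and systemd-timesyncd synchronizing against 10.0.0.1. A small sketch for inspecting that link state at runtime follows; it assumes networkctl is on PATH and the interface is still named eth0, so pass a different name on hosts with predictable interface names.]

```python
#!/usr/bin/env python3
"""Minimal sketch: show the runtime state of the link that systemd-networkd
configured above (eth0, DHCPv4 10.0.0.70/16 in the transcript).

Assumptions: networkctl is on PATH and the interface is called eth0.
"""
import subprocess
import sys

def link_status(ifname: str) -> str:
    return subprocess.run(
        ["networkctl", "status", ifname, "--no-pager"],
        check=True, capture_output=True, text=True,
    ).stdout

if __name__ == "__main__":
    print(link_status(sys.argv[1] if len(sys.argv) > 1 else "eth0"))
```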
Oct 31 20:55:31.224772 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 31 20:55:31.226276 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 31 20:55:31.227563 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 31 20:55:31.234823 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 20:55:31.236202 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 31 20:55:31.237881 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 20:55:31.239112 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 20:55:31.240042 systemd[1]: Reached target basic.target - Basic System. Oct 31 20:55:31.241083 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 20:55:31.241134 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 20:55:31.241997 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 20:55:31.244005 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 20:55:31.245864 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 20:55:31.247909 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 20:55:31.250059 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 20:55:31.251111 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 20:55:31.252518 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 20:55:31.255044 jq[1562]: false Oct 31 20:55:31.255229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 20:55:31.257185 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 20:55:31.259938 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 20:55:31.263372 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 20:55:31.264436 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 20:55:31.264810 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 20:55:31.265366 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 20:55:31.265836 extend-filesystems[1563]: Found /dev/vda6 Oct 31 20:55:31.267374 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 20:55:31.271171 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 20:55:31.272480 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 20:55:31.273026 extend-filesystems[1563]: Found /dev/vda9 Oct 31 20:55:31.274263 extend-filesystems[1563]: Checking size of /dev/vda9 Oct 31 20:55:31.275217 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 20:55:31.276709 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Oct 31 20:55:31.276871 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 20:55:31.277558 jq[1576]: true Oct 31 20:55:31.285444 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 20:55:31.286300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 20:55:31.291117 jq[1590]: true Oct 31 20:55:31.293940 extend-filesystems[1563]: Resized partition /dev/vda9 Oct 31 20:55:31.301077 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Oct 31 20:55:31.304171 tar[1584]: linux-arm64/LICENSE Oct 31 20:55:31.304171 tar[1584]: linux-arm64/helm Oct 31 20:55:31.311657 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 31 20:55:31.323060 update_engine[1574]: I20251031 20:55:31.322857 1574 main.cc:92] Flatcar Update Engine starting Oct 31 20:55:31.331034 dbus-daemon[1560]: [system] SELinux support is enabled Oct 31 20:55:31.332325 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 20:55:31.337003 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 20:55:31.337035 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 20:55:31.340861 update_engine[1574]: I20251031 20:55:31.340790 1574 update_check_scheduler.cc:74] Next update check in 5m35s Oct 31 20:55:31.341429 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 20:55:31.341456 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 20:55:31.343689 systemd[1]: Started update-engine.service - Update Engine. Oct 31 20:55:31.347076 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 20:55:31.353911 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 31 20:55:31.370234 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 20:55:31.370234 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 20:55:31.370234 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 31 20:55:31.375166 extend-filesystems[1563]: Resized filesystem in /dev/vda9 Oct 31 20:55:31.372254 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 20:55:31.372443 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 20:55:31.377432 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (Power Button) Oct 31 20:55:31.377638 systemd-logind[1572]: New seat seat0. Oct 31 20:55:31.379446 systemd[1]: Started systemd-logind.service - User Login Management. Oct 31 20:55:31.385359 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Oct 31 20:55:31.386902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 20:55:31.389418 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
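[Editor's note: the kernel and extend-filesystems entries above report the root ext4 filesystem on /dev/vda9 growing online from 456704 to 1784827 blocks at the 4 KiB block size resize2fs mentions. A short worked check of what those block counts mean in GiB follows; the numbers are taken straight from the log, nothing else is assumed.]

```python
#!/usr/bin/env python3
"""Worked check of the ext4 resize reported above: block counts come from the
kernel messages, block size is the 4 KiB value resize2fs prints ("(4k)")."""

BLOCK_SIZE = 4096          # bytes, "(4k)" per the log
OLD_BLOCKS = 456_704       # before extend-filesystems ran
NEW_BLOCKS = 1_784_827     # after the online resize

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

if __name__ == "__main__":
    print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")   # ~1.74 GiB
    print(f"after  resize: {gib(NEW_BLOCKS):.2f} GiB")   # ~6.81 GiB
    print(f"gained       : {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")
```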
Oct 31 20:55:31.412321 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 20:55:31.437894 containerd[1586]: time="2025-10-31T20:55:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 31 20:55:31.438727 containerd[1586]: time="2025-10-31T20:55:31.438694480Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Oct 31 20:55:31.456513 containerd[1586]: time="2025-10-31T20:55:31.456404400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.64µs" Oct 31 20:55:31.456513 containerd[1586]: time="2025-10-31T20:55:31.456491280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 31 20:55:31.456626 containerd[1586]: time="2025-10-31T20:55:31.456530480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 31 20:55:31.456626 containerd[1586]: time="2025-10-31T20:55:31.456543000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 31 20:55:31.456766 containerd[1586]: time="2025-10-31T20:55:31.456730520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 31 20:55:31.456766 containerd[1586]: time="2025-10-31T20:55:31.456758120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 20:55:31.456888 containerd[1586]: time="2025-10-31T20:55:31.456853560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 20:55:31.456888 containerd[1586]: time="2025-10-31T20:55:31.456874000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457286 containerd[1586]: time="2025-10-31T20:55:31.457247120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457286 containerd[1586]: time="2025-10-31T20:55:31.457274840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457327 containerd[1586]: time="2025-10-31T20:55:31.457287360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457327 containerd[1586]: time="2025-10-31T20:55:31.457295400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457542 containerd[1586]: time="2025-10-31T20:55:31.457507520Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457542 containerd[1586]: time="2025-10-31T20:55:31.457530400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457683 containerd[1586]: 
time="2025-10-31T20:55:31.457657680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457921 containerd[1586]: time="2025-10-31T20:55:31.457888480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457946 containerd[1586]: time="2025-10-31T20:55:31.457926840Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 20:55:31.457997 containerd[1586]: time="2025-10-31T20:55:31.457937200Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 31 20:55:31.458033 containerd[1586]: time="2025-10-31T20:55:31.458018240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 31 20:55:31.458294 containerd[1586]: time="2025-10-31T20:55:31.458267880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 31 20:55:31.458381 containerd[1586]: time="2025-10-31T20:55:31.458365760Z" level=info msg="metadata content store policy set" policy=shared Oct 31 20:55:31.461683 containerd[1586]: time="2025-10-31T20:55:31.461648960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 31 20:55:31.461726 containerd[1586]: time="2025-10-31T20:55:31.461695320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 31 20:55:31.461798 containerd[1586]: time="2025-10-31T20:55:31.461778840Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Oct 31 20:55:31.461798 containerd[1586]: time="2025-10-31T20:55:31.461795640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 31 20:55:31.461881 containerd[1586]: time="2025-10-31T20:55:31.461809200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 31 20:55:31.461881 containerd[1586]: time="2025-10-31T20:55:31.461820880Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 31 20:55:31.461881 containerd[1586]: time="2025-10-31T20:55:31.461841360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 31 20:55:31.461881 containerd[1586]: time="2025-10-31T20:55:31.461851160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 31 20:55:31.461881 containerd[1586]: time="2025-10-31T20:55:31.461864360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 31 20:55:31.461881 containerd[1586]: time="2025-10-31T20:55:31.461876440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 31 20:55:31.461974 containerd[1586]: time="2025-10-31T20:55:31.461887600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 31 20:55:31.461974 containerd[1586]: time="2025-10-31T20:55:31.461898760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Oct 31 20:55:31.461974 containerd[1586]: time="2025-10-31T20:55:31.461908120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 31 20:55:31.461974 containerd[1586]: time="2025-10-31T20:55:31.461919560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 31 20:55:31.462046 containerd[1586]: time="2025-10-31T20:55:31.462026560Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 31 20:55:31.462068 containerd[1586]: time="2025-10-31T20:55:31.462053640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 31 20:55:31.462100 containerd[1586]: time="2025-10-31T20:55:31.462068720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 31 20:55:31.462100 containerd[1586]: time="2025-10-31T20:55:31.462080440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 31 20:55:31.463147 containerd[1586]: time="2025-10-31T20:55:31.463119000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 31 20:55:31.463147 containerd[1586]: time="2025-10-31T20:55:31.463142680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 31 20:55:31.463245 containerd[1586]: time="2025-10-31T20:55:31.463166960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 31 20:55:31.463245 containerd[1586]: time="2025-10-31T20:55:31.463177480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 31 20:55:31.463245 containerd[1586]: time="2025-10-31T20:55:31.463188080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 31 20:55:31.463245 containerd[1586]: time="2025-10-31T20:55:31.463198480Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 31 20:55:31.463245 containerd[1586]: time="2025-10-31T20:55:31.463208440Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 31 20:55:31.463245 containerd[1586]: time="2025-10-31T20:55:31.463234000Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 31 20:55:31.463372 containerd[1586]: time="2025-10-31T20:55:31.463269280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 31 20:55:31.463372 containerd[1586]: time="2025-10-31T20:55:31.463283680Z" level=info msg="Start snapshots syncer" Oct 31 20:55:31.463372 containerd[1586]: time="2025-10-31T20:55:31.463313680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 31 20:55:31.463541 containerd[1586]: time="2025-10-31T20:55:31.463507680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 31 20:55:31.463632 containerd[1586]: time="2025-10-31T20:55:31.463555840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 31 20:55:31.463632 containerd[1586]: time="2025-10-31T20:55:31.463615640Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 31 20:55:31.463774 containerd[1586]: time="2025-10-31T20:55:31.463716400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 31 20:55:31.463774 containerd[1586]: time="2025-10-31T20:55:31.463748840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 31 20:55:31.463774 containerd[1586]: time="2025-10-31T20:55:31.463761040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 31 20:55:31.463774 containerd[1586]: time="2025-10-31T20:55:31.463771600Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463782720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463792520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463802920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463812600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 31 
20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463822320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463845480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463857520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463866360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463875000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463882880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463895240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463905640Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463917920Z" level=info msg="runtime interface created" Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463923240Z" level=info msg="created NRI interface" Oct 31 20:55:31.464000 containerd[1586]: time="2025-10-31T20:55:31.463931120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 31 20:55:31.464279 containerd[1586]: time="2025-10-31T20:55:31.463941240Z" level=info msg="Connect containerd service" Oct 31 20:55:31.464279 containerd[1586]: time="2025-10-31T20:55:31.463961160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 20:55:31.468376 containerd[1586]: time="2025-10-31T20:55:31.468323880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 20:55:31.535194 containerd[1586]: time="2025-10-31T20:55:31.535145080Z" level=info msg="Start subscribing containerd event" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535704120Z" level=info msg="Start recovering state" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535721640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535786440Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535814280Z" level=info msg="Start event monitor" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535833880Z" level=info msg="Start cni network conf syncer for default" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535842440Z" level=info msg="Start streaming server" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535858160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535868600Z" level=info msg="runtime interface starting up..." Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535874800Z" level=info msg="starting plugins..." Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.535888200Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 31 20:55:31.536482 containerd[1586]: time="2025-10-31T20:55:31.536020440Z" level=info msg="containerd successfully booted in 0.098456s" Oct 31 20:55:31.536188 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 20:55:31.619120 tar[1584]: linux-arm64/README.md Oct 31 20:55:31.641167 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 20:55:32.069687 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 20:55:32.088619 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 20:55:32.091569 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 20:55:32.116538 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 20:55:32.116737 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 20:55:32.120537 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 20:55:32.145991 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 20:55:32.148853 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 20:55:32.151046 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 31 20:55:32.152449 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 20:55:33.072567 systemd-networkd[1504]: eth0: Gained IPv6LL Oct 31 20:55:33.074884 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 20:55:33.077776 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 20:55:33.080359 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 31 20:55:33.082802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 20:55:33.093522 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 20:55:33.108185 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 20:55:33.108379 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 31 20:55:33.109994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 20:55:33.111921 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 20:55:33.645153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 20:55:33.646743 systemd[1]: Reached target multi-user.target - Multi-User System. 
Oct 31 20:55:33.649342 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 20:55:33.649867 systemd[1]: Startup finished in 1.409s (kernel) + 5.531s (initrd) + 4.163s (userspace) = 11.104s. Oct 31 20:55:33.988281 kubelet[1700]: E1031 20:55:33.988174 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 20:55:33.990453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 20:55:33.990586 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 20:55:33.991078 systemd[1]: kubelet.service: Consumed 739ms CPU time, 255.5M memory peak. Oct 31 20:55:35.261471 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 20:55:35.262595 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:32918.service - OpenSSH per-connection server daemon (10.0.0.1:32918). Oct 31 20:55:35.343627 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 32918 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:35.345314 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:35.351187 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 20:55:35.352081 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 20:55:35.356880 systemd-logind[1572]: New session 1 of user core. Oct 31 20:55:35.370127 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 20:55:35.372358 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 20:55:35.391020 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 20:55:35.393122 systemd-logind[1572]: New session c1 of user core. Oct 31 20:55:35.494572 systemd[1719]: Queued start job for default target default.target. Oct 31 20:55:35.518032 systemd[1719]: Created slice app.slice - User Application Slice. Oct 31 20:55:35.518062 systemd[1719]: Reached target paths.target - Paths. Oct 31 20:55:35.518119 systemd[1719]: Reached target timers.target - Timers. Oct 31 20:55:35.519313 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 20:55:35.528622 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 20:55:35.528683 systemd[1719]: Reached target sockets.target - Sockets. Oct 31 20:55:35.528718 systemd[1719]: Reached target basic.target - Basic System. Oct 31 20:55:35.528744 systemd[1719]: Reached target default.target - Main User Target. Oct 31 20:55:35.528769 systemd[1719]: Startup finished in 130ms. Oct 31 20:55:35.528972 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 20:55:35.530233 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 20:55:35.539819 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:32924.service - OpenSSH per-connection server daemon (10.0.0.1:32924). 
Oct 31 20:55:35.588584 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 32924 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:35.589858 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:35.593974 systemd-logind[1572]: New session 2 of user core. Oct 31 20:55:35.608258 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 20:55:35.618720 sshd[1733]: Connection closed by 10.0.0.1 port 32924 Oct 31 20:55:35.619150 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Oct 31 20:55:35.632825 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:32924.service: Deactivated successfully. Oct 31 20:55:35.635264 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 20:55:35.637139 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit. Oct 31 20:55:35.637866 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:32930.service - OpenSSH per-connection server daemon (10.0.0.1:32930). Oct 31 20:55:35.639232 systemd-logind[1572]: Removed session 2. Oct 31 20:55:35.682506 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 32930 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:35.683699 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:35.688143 systemd-logind[1572]: New session 3 of user core. Oct 31 20:55:35.703253 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 20:55:35.710526 sshd[1742]: Connection closed by 10.0.0.1 port 32930 Oct 31 20:55:35.710797 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Oct 31 20:55:35.716392 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:32930.service: Deactivated successfully. Oct 31 20:55:35.718361 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 20:55:35.720330 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit. Oct 31 20:55:35.722352 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:32936.service - OpenSSH per-connection server daemon (10.0.0.1:32936). Oct 31 20:55:35.723279 systemd-logind[1572]: Removed session 3. Oct 31 20:55:35.774949 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 32936 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:35.776314 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:35.780484 systemd-logind[1572]: New session 4 of user core. Oct 31 20:55:35.790227 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 20:55:35.800033 sshd[1751]: Connection closed by 10.0.0.1 port 32936 Oct 31 20:55:35.800319 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Oct 31 20:55:35.809809 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:32936.service: Deactivated successfully. Oct 31 20:55:35.811088 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 20:55:35.811742 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit. Oct 31 20:55:35.813885 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:32948.service - OpenSSH per-connection server daemon (10.0.0.1:32948). Oct 31 20:55:35.814608 systemd-logind[1572]: Removed session 4. 
Oct 31 20:55:35.866603 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 32948 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:35.867586 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:35.871897 systemd-logind[1572]: New session 5 of user core. Oct 31 20:55:35.879244 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 20:55:35.894881 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 20:55:35.895155 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 20:55:35.906887 sudo[1762]: pam_unix(sudo:session): session closed for user root Oct 31 20:55:35.908396 sshd[1761]: Connection closed by 10.0.0.1 port 32948 Oct 31 20:55:35.909025 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Oct 31 20:55:35.921778 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:32948.service: Deactivated successfully. Oct 31 20:55:35.924227 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 20:55:35.924866 systemd-logind[1572]: Session 5 logged out. Waiting for processes to exit. Oct 31 20:55:35.928349 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:32958.service - OpenSSH per-connection server daemon (10.0.0.1:32958). Oct 31 20:55:35.928801 systemd-logind[1572]: Removed session 5. Oct 31 20:55:35.984577 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 32958 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:35.985560 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:35.989557 systemd-logind[1572]: New session 6 of user core. Oct 31 20:55:36.000295 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 20:55:36.009425 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 20:55:36.009660 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 20:55:36.013690 sudo[1773]: pam_unix(sudo:session): session closed for user root Oct 31 20:55:36.018737 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 20:55:36.018960 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 20:55:36.027859 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 20:55:36.056078 augenrules[1795]: No rules Oct 31 20:55:36.056655 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 20:55:36.058185 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 20:55:36.059037 sudo[1772]: pam_unix(sudo:session): session closed for user root Oct 31 20:55:36.060440 sshd[1771]: Connection closed by 10.0.0.1 port 32958 Oct 31 20:55:36.060701 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Oct 31 20:55:36.075908 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:32958.service: Deactivated successfully. Oct 31 20:55:36.077261 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 20:55:36.077856 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit. Oct 31 20:55:36.079944 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:32962.service - OpenSSH per-connection server daemon (10.0.0.1:32962). Oct 31 20:55:36.080644 systemd-logind[1572]: Removed session 6. 
Oct 31 20:55:36.135137 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 32962 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:55:36.136223 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:55:36.140163 systemd-logind[1572]: New session 7 of user core. Oct 31 20:55:36.156253 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 20:55:36.165944 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 20:55:36.166475 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 20:55:36.429567 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 20:55:36.440410 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 20:55:36.633314 dockerd[1828]: time="2025-10-31T20:55:36.633244351Z" level=info msg="Starting up" Oct 31 20:55:36.634608 dockerd[1828]: time="2025-10-31T20:55:36.634583713Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 20:55:36.644712 dockerd[1828]: time="2025-10-31T20:55:36.644676867Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 20:55:36.834184 dockerd[1828]: time="2025-10-31T20:55:36.834143954Z" level=info msg="Loading containers: start." Oct 31 20:55:36.844100 kernel: Initializing XFRM netlink socket Oct 31 20:55:37.014196 systemd-networkd[1504]: docker0: Link UP Oct 31 20:55:37.017036 dockerd[1828]: time="2025-10-31T20:55:37.016913291Z" level=info msg="Loading containers: done." Oct 31 20:55:37.027885 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4106442669-merged.mount: Deactivated successfully. Oct 31 20:55:37.030821 dockerd[1828]: time="2025-10-31T20:55:37.029051400Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 20:55:37.030902 dockerd[1828]: time="2025-10-31T20:55:37.030859539Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 20:55:37.031067 dockerd[1828]: time="2025-10-31T20:55:37.031033203Z" level=info msg="Initializing buildkit" Oct 31 20:55:37.050209 dockerd[1828]: time="2025-10-31T20:55:37.050180405Z" level=info msg="Completed buildkit initialization" Oct 31 20:55:37.054755 dockerd[1828]: time="2025-10-31T20:55:37.054717215Z" level=info msg="Daemon has completed initialization" Oct 31 20:55:37.055186 dockerd[1828]: time="2025-10-31T20:55:37.054806986Z" level=info msg="API listen on /run/docker.sock" Oct 31 20:55:37.054969 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 20:55:37.695618 containerd[1586]: time="2025-10-31T20:55:37.695576675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 31 20:55:38.632780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779727872.mount: Deactivated successfully. 
Oct 31 20:55:39.157972 containerd[1586]: time="2025-10-31T20:55:39.157920343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:39.158537 containerd[1586]: time="2025-10-31T20:55:39.158488722Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=24783206" Oct 31 20:55:39.159208 containerd[1586]: time="2025-10-31T20:55:39.159156980Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:39.162420 containerd[1586]: time="2025-10-31T20:55:39.162366493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:39.164112 containerd[1586]: time="2025-10-31T20:55:39.163916442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.468298913s" Oct 31 20:55:39.164112 containerd[1586]: time="2025-10-31T20:55:39.163947898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 31 20:55:39.164726 containerd[1586]: time="2025-10-31T20:55:39.164689486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 31 20:55:40.372365 containerd[1586]: time="2025-10-31T20:55:40.372315341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:40.373226 containerd[1586]: time="2025-10-31T20:55:40.373183736Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22523051" Oct 31 20:55:40.373883 containerd[1586]: time="2025-10-31T20:55:40.373859883Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:40.377204 containerd[1586]: time="2025-10-31T20:55:40.377174510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:40.378700 containerd[1586]: time="2025-10-31T20:55:40.378658857Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.213887757s" Oct 31 20:55:40.378700 containerd[1586]: time="2025-10-31T20:55:40.378695409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 31 20:55:40.379235 
containerd[1586]: time="2025-10-31T20:55:40.379167364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 31 20:55:41.640335 containerd[1586]: time="2025-10-31T20:55:41.639550368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:41.640335 containerd[1586]: time="2025-10-31T20:55:41.640113739Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17476195" Oct 31 20:55:41.641454 containerd[1586]: time="2025-10-31T20:55:41.641424748Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:41.645807 containerd[1586]: time="2025-10-31T20:55:41.645778446Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.266584305s" Oct 31 20:55:41.645909 containerd[1586]: time="2025-10-31T20:55:41.645894136Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 31 20:55:41.646004 containerd[1586]: time="2025-10-31T20:55:41.645966291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:41.647225 containerd[1586]: time="2025-10-31T20:55:41.647198593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 31 20:55:42.645830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900484323.mount: Deactivated successfully. 
Oct 31 20:55:42.976876 containerd[1586]: time="2025-10-31T20:55:42.976757303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:42.977538 containerd[1586]: time="2025-10-31T20:55:42.977489963Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27414096" Oct 31 20:55:42.978376 containerd[1586]: time="2025-10-31T20:55:42.978323247Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:42.982112 containerd[1586]: time="2025-10-31T20:55:42.980601277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:42.982112 containerd[1586]: time="2025-10-31T20:55:42.981992013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.334759698s" Oct 31 20:55:42.982112 containerd[1586]: time="2025-10-31T20:55:42.982023357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 31 20:55:42.982574 containerd[1586]: time="2025-10-31T20:55:42.982539298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 31 20:55:43.601526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760738579.mount: Deactivated successfully. Oct 31 20:55:44.027591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 20:55:44.029019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 20:55:44.168737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 20:55:44.181361 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 20:55:44.222958 kubelet[2182]: E1031 20:55:44.222908 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 20:55:44.227012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 20:55:44.227152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 20:55:44.229162 systemd[1]: kubelet.service: Consumed 144ms CPU time, 108.1M memory peak. 
Oct 31 20:55:44.397933 containerd[1586]: time="2025-10-31T20:55:44.397802784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:44.400058 containerd[1586]: time="2025-10-31T20:55:44.400005561Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Oct 31 20:55:44.402314 containerd[1586]: time="2025-10-31T20:55:44.402278542Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:44.405702 containerd[1586]: time="2025-10-31T20:55:44.405656166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:44.406678 containerd[1586]: time="2025-10-31T20:55:44.406632005Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.424054013s" Oct 31 20:55:44.406678 containerd[1586]: time="2025-10-31T20:55:44.406668031Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 31 20:55:44.407083 containerd[1586]: time="2025-10-31T20:55:44.407017448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 20:55:44.830475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448551399.mount: Deactivated successfully. 
Oct 31 20:55:44.835541 containerd[1586]: time="2025-10-31T20:55:44.835503915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 20:55:44.836072 containerd[1586]: time="2025-10-31T20:55:44.836027439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 20:55:44.836944 containerd[1586]: time="2025-10-31T20:55:44.836896445Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 20:55:44.839043 containerd[1586]: time="2025-10-31T20:55:44.838998613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 20:55:44.839950 containerd[1586]: time="2025-10-31T20:55:44.839588326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 432.539711ms" Oct 31 20:55:44.839950 containerd[1586]: time="2025-10-31T20:55:44.839617725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 31 20:55:44.840278 containerd[1586]: time="2025-10-31T20:55:44.840235230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 31 20:55:45.464551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619272802.mount: Deactivated successfully. 
Oct 31 20:55:47.786895 containerd[1586]: time="2025-10-31T20:55:47.785935868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:47.786895 containerd[1586]: time="2025-10-31T20:55:47.786413306Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Oct 31 20:55:47.787562 containerd[1586]: time="2025-10-31T20:55:47.787530724Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:47.790129 containerd[1586]: time="2025-10-31T20:55:47.790073510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:55:47.791449 containerd[1586]: time="2025-10-31T20:55:47.791299491Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.950994384s" Oct 31 20:55:47.791449 containerd[1586]: time="2025-10-31T20:55:47.791334007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 31 20:55:52.579523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 20:55:52.579667 systemd[1]: kubelet.service: Consumed 144ms CPU time, 108.1M memory peak. Oct 31 20:55:52.581552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 20:55:52.606195 systemd[1]: Reload requested from client PID 2282 ('systemctl') (unit session-7.scope)... Oct 31 20:55:52.606211 systemd[1]: Reloading... Oct 31 20:55:52.685130 zram_generator::config[2333]: No configuration found. Oct 31 20:55:52.888547 systemd[1]: Reloading finished in 282 ms. Oct 31 20:55:52.937476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 20:55:52.937545 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 20:55:52.937799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 20:55:52.937838 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Oct 31 20:55:52.939135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 20:55:53.052749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 20:55:53.058037 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 20:55:53.092207 kubelet[2372]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 20:55:53.092207 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 20:55:53.092207 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 20:55:53.092492 kubelet[2372]: I1031 20:55:53.092261 2372 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 20:55:54.001284 kubelet[2372]: I1031 20:55:54.001234 2372 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 20:55:54.001284 kubelet[2372]: I1031 20:55:54.001270 2372 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 20:55:54.001543 kubelet[2372]: I1031 20:55:54.001526 2372 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 20:55:54.025198 kubelet[2372]: E1031 20:55:54.025131 2372 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 31 20:55:54.026170 kubelet[2372]: I1031 20:55:54.026061 2372 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 20:55:54.031701 kubelet[2372]: I1031 20:55:54.031670 2372 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 20:55:54.035254 kubelet[2372]: I1031 20:55:54.035233 2372 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 20:55:54.035900 kubelet[2372]: I1031 20:55:54.035858 2372 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 20:55:54.036069 kubelet[2372]: I1031 20:55:54.035907 2372 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 20:55:54.036180 kubelet[2372]: I1031 20:55:54.036164 2372 
topology_manager.go:138] "Creating topology manager with none policy" Oct 31 20:55:54.036180 kubelet[2372]: I1031 20:55:54.036179 2372 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 20:55:54.036380 kubelet[2372]: I1031 20:55:54.036367 2372 state_mem.go:36] "Initialized new in-memory state store" Oct 31 20:55:54.038772 kubelet[2372]: I1031 20:55:54.038742 2372 kubelet.go:446] "Attempting to sync node with API server" Oct 31 20:55:54.038900 kubelet[2372]: I1031 20:55:54.038772 2372 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 20:55:54.038900 kubelet[2372]: I1031 20:55:54.038891 2372 kubelet.go:352] "Adding apiserver pod source" Oct 31 20:55:54.039359 kubelet[2372]: I1031 20:55:54.039321 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 20:55:54.043515 kubelet[2372]: W1031 20:55:54.043468 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 31 20:55:54.043593 kubelet[2372]: W1031 20:55:54.043513 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 31 20:55:54.043593 kubelet[2372]: E1031 20:55:54.043528 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 31 20:55:54.043593 kubelet[2372]: E1031 20:55:54.043562 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 31 20:55:54.043653 kubelet[2372]: I1031 20:55:54.043613 2372 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 31 20:55:54.045827 kubelet[2372]: I1031 20:55:54.044315 2372 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 20:55:54.045827 kubelet[2372]: W1031 20:55:54.044438 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 31 20:55:54.045827 kubelet[2372]: I1031 20:55:54.045622 2372 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 20:55:54.045827 kubelet[2372]: I1031 20:55:54.045651 2372 server.go:1287] "Started kubelet" Oct 31 20:55:54.046688 kubelet[2372]: I1031 20:55:54.046650 2372 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 20:55:54.048819 kubelet[2372]: I1031 20:55:54.048347 2372 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 20:55:54.048819 kubelet[2372]: I1031 20:55:54.048579 2372 server.go:479] "Adding debug handlers to kubelet server" Oct 31 20:55:54.048819 kubelet[2372]: I1031 20:55:54.048686 2372 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 20:55:54.049797 kubelet[2372]: I1031 20:55:54.049775 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 20:55:54.050394 kubelet[2372]: I1031 20:55:54.050367 2372 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 20:55:54.051390 kubelet[2372]: I1031 20:55:54.051374 2372 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 20:55:54.051653 kubelet[2372]: E1031 20:55:54.051635 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 20:55:54.052873 kubelet[2372]: E1031 20:55:54.052621 2372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873aedf77f75596 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 20:55:54.045633942 +0000 UTC m=+0.984508215,LastTimestamp:2025-10-31 20:55:54.045633942 +0000 UTC m=+0.984508215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 20:55:54.052996 kubelet[2372]: I1031 20:55:54.052854 2372 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 20:55:54.053041 kubelet[2372]: I1031 20:55:54.053011 2372 factory.go:221] Registration of the systemd container factory successfully Oct 31 20:55:54.053331 kubelet[2372]: I1031 20:55:54.053206 2372 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 20:55:54.053563 kubelet[2372]: I1031 20:55:54.053549 2372 reconciler.go:26] "Reconciler: start to sync state" Oct 31 20:55:54.054710 kubelet[2372]: W1031 20:55:54.054676 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 31 20:55:54.054847 kubelet[2372]: E1031 20:55:54.054815 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.70:6443: connect: connection refused" interval="200ms" Oct 31 20:55:54.054921 kubelet[2372]: E1031 20:55:54.054904 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 31 20:55:54.055696 kubelet[2372]: I1031 20:55:54.055674 2372 factory.go:221] Registration of the containerd container factory successfully Oct 31 20:55:54.057478 kubelet[2372]: E1031 20:55:54.057453 2372 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 20:55:54.065165 kubelet[2372]: I1031 20:55:54.065063 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 20:55:54.066746 kubelet[2372]: I1031 20:55:54.066673 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 20:55:54.066746 kubelet[2372]: I1031 20:55:54.066718 2372 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 20:55:54.066746 kubelet[2372]: I1031 20:55:54.066750 2372 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 20:55:54.066875 kubelet[2372]: I1031 20:55:54.066757 2372 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 20:55:54.066875 kubelet[2372]: E1031 20:55:54.066808 2372 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 20:55:54.067917 kubelet[2372]: W1031 20:55:54.067838 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 31 20:55:54.067917 kubelet[2372]: E1031 20:55:54.067877 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 31 20:55:54.068763 kubelet[2372]: I1031 20:55:54.068746 2372 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 20:55:54.068923 kubelet[2372]: I1031 20:55:54.068850 2372 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 20:55:54.068923 kubelet[2372]: I1031 20:55:54.068872 2372 state_mem.go:36] "Initialized new in-memory state store" Oct 31 20:55:54.147673 kubelet[2372]: I1031 20:55:54.147639 2372 policy_none.go:49] "None policy: Start" Oct 31 20:55:54.148284 kubelet[2372]: I1031 20:55:54.148053 2372 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 20:55:54.148284 kubelet[2372]: I1031 20:55:54.148079 2372 state_mem.go:35] "Initializing new in-memory state store" Oct 31 20:55:54.154114 kubelet[2372]: E1031 20:55:54.154085 2372 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 20:55:54.157031 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 31 20:55:54.167337 kubelet[2372]: E1031 20:55:54.167307 2372 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 20:55:54.173580 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 20:55:54.177715 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 20:55:54.197015 kubelet[2372]: I1031 20:55:54.196874 2372 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 20:55:54.197126 kubelet[2372]: I1031 20:55:54.197109 2372 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 20:55:54.197172 kubelet[2372]: I1031 20:55:54.197138 2372 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 20:55:54.197395 kubelet[2372]: I1031 20:55:54.197375 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 20:55:54.198385 kubelet[2372]: E1031 20:55:54.198337 2372 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 20:55:54.198534 kubelet[2372]: E1031 20:55:54.198463 2372 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 20:55:54.255912 kubelet[2372]: E1031 20:55:54.255817 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Oct 31 20:55:54.299985 kubelet[2372]: I1031 20:55:54.299948 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 20:55:54.300438 kubelet[2372]: E1031 20:55:54.300407 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 31 20:55:54.376236 systemd[1]: Created slice kubepods-burstable-pod8e723b0d91a955e221ef9de3249cead2.slice - libcontainer container kubepods-burstable-pod8e723b0d91a955e221ef9de3249cead2.slice. Oct 31 20:55:54.397788 kubelet[2372]: E1031 20:55:54.397764 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:54.401314 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 31 20:55:54.417497 kubelet[2372]: E1031 20:55:54.417467 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:54.420428 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 31 20:55:54.422347 kubelet[2372]: E1031 20:55:54.422324 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:54.455954 kubelet[2372]: I1031 20:55:54.455721 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e723b0d91a955e221ef9de3249cead2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e723b0d91a955e221ef9de3249cead2\") " pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:54.455954 kubelet[2372]: I1031 20:55:54.455762 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:54.455954 kubelet[2372]: I1031 20:55:54.455778 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:54.455954 kubelet[2372]: I1031 20:55:54.455795 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:54.455954 kubelet[2372]: I1031 20:55:54.455811 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 20:55:54.456180 kubelet[2372]: I1031 20:55:54.455825 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e723b0d91a955e221ef9de3249cead2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e723b0d91a955e221ef9de3249cead2\") " pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:54.456180 kubelet[2372]: I1031 20:55:54.455845 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e723b0d91a955e221ef9de3249cead2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e723b0d91a955e221ef9de3249cead2\") " pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:54.456180 kubelet[2372]: I1031 20:55:54.455859 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:54.456180 kubelet[2372]: I1031 20:55:54.455874 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:54.501968 kubelet[2372]: I1031 20:55:54.501926 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 20:55:54.502391 kubelet[2372]: E1031 20:55:54.502342 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 31 20:55:54.657307 kubelet[2372]: E1031 20:55:54.657263 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Oct 31 20:55:54.698543 kubelet[2372]: E1031 20:55:54.698506 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:54.699195 containerd[1586]: time="2025-10-31T20:55:54.699113595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e723b0d91a955e221ef9de3249cead2,Namespace:kube-system,Attempt:0,}" Oct 31 20:55:54.718525 containerd[1586]: time="2025-10-31T20:55:54.717740826Z" level=info msg="connecting to shim 134af1277376ac487f2182e004d4b9cb282962e0ab94aa72f397cefa1dc3e91a" address="unix:///run/containerd/s/f546ba5b299f09d91f8539bc6a32a508e14b88e693aeea1000ba037ed6b24006" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:55:54.718626 kubelet[2372]: E1031 20:55:54.718426 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:54.718871 containerd[1586]: time="2025-10-31T20:55:54.718839689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 20:55:54.726315 kubelet[2372]: E1031 20:55:54.723343 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:54.726391 containerd[1586]: time="2025-10-31T20:55:54.723760018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 20:55:54.744016 containerd[1586]: time="2025-10-31T20:55:54.743980085Z" level=info msg="connecting to shim a0a90def2da1f52da47ced0fc37236856a13f063dc14d2e0d5a5339c2aded2e3" address="unix:///run/containerd/s/cb2ad7b3b6e1ce488c599b31b8b5ecd78cb2dddb9eddba40b5ccfbd63064526d" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:55:54.757231 containerd[1586]: time="2025-10-31T20:55:54.757185557Z" level=info msg="connecting to shim 5ca1f96241c343db00dfe329142eec410ff59ca7b12c7e29d69322da14dbab68" address="unix:///run/containerd/s/821b68b02949babd2354d958864121cb3e8ac6fdd56aa2928bc70d80c1997532" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:55:54.765259 systemd[1]: Started cri-containerd-134af1277376ac487f2182e004d4b9cb282962e0ab94aa72f397cefa1dc3e91a.scope - libcontainer container 
134af1277376ac487f2182e004d4b9cb282962e0ab94aa72f397cefa1dc3e91a. Oct 31 20:55:54.773218 systemd[1]: Started cri-containerd-a0a90def2da1f52da47ced0fc37236856a13f063dc14d2e0d5a5339c2aded2e3.scope - libcontainer container a0a90def2da1f52da47ced0fc37236856a13f063dc14d2e0d5a5339c2aded2e3. Oct 31 20:55:54.776712 systemd[1]: Started cri-containerd-5ca1f96241c343db00dfe329142eec410ff59ca7b12c7e29d69322da14dbab68.scope - libcontainer container 5ca1f96241c343db00dfe329142eec410ff59ca7b12c7e29d69322da14dbab68. Oct 31 20:55:54.816761 containerd[1586]: time="2025-10-31T20:55:54.816719235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8e723b0d91a955e221ef9de3249cead2,Namespace:kube-system,Attempt:0,} returns sandbox id \"134af1277376ac487f2182e004d4b9cb282962e0ab94aa72f397cefa1dc3e91a\"" Oct 31 20:55:54.818761 kubelet[2372]: E1031 20:55:54.818732 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:54.819929 containerd[1586]: time="2025-10-31T20:55:54.819281738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0a90def2da1f52da47ced0fc37236856a13f063dc14d2e0d5a5339c2aded2e3\"" Oct 31 20:55:54.820046 kubelet[2372]: E1031 20:55:54.820019 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:54.821845 containerd[1586]: time="2025-10-31T20:55:54.821816502Z" level=info msg="CreateContainer within sandbox \"a0a90def2da1f52da47ced0fc37236856a13f063dc14d2e0d5a5339c2aded2e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 20:55:54.822650 containerd[1586]: time="2025-10-31T20:55:54.822033605Z" level=info msg="CreateContainer within sandbox \"134af1277376ac487f2182e004d4b9cb282962e0ab94aa72f397cefa1dc3e91a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 20:55:54.825786 containerd[1586]: time="2025-10-31T20:55:54.825757103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ca1f96241c343db00dfe329142eec410ff59ca7b12c7e29d69322da14dbab68\"" Oct 31 20:55:54.826419 kubelet[2372]: E1031 20:55:54.826391 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:54.827799 containerd[1586]: time="2025-10-31T20:55:54.827768591Z" level=info msg="CreateContainer within sandbox \"5ca1f96241c343db00dfe329142eec410ff59ca7b12c7e29d69322da14dbab68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 20:55:54.833114 containerd[1586]: time="2025-10-31T20:55:54.833065323Z" level=info msg="Container 2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:55:54.836005 containerd[1586]: time="2025-10-31T20:55:54.835968191Z" level=info msg="Container 1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:55:54.839838 containerd[1586]: time="2025-10-31T20:55:54.839746927Z" level=info msg="Container 
24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:55:54.843536 containerd[1586]: time="2025-10-31T20:55:54.843497002Z" level=info msg="CreateContainer within sandbox \"a0a90def2da1f52da47ced0fc37236856a13f063dc14d2e0d5a5339c2aded2e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830\"" Oct 31 20:55:54.844308 containerd[1586]: time="2025-10-31T20:55:54.844279991Z" level=info msg="StartContainer for \"2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830\"" Oct 31 20:55:54.845713 containerd[1586]: time="2025-10-31T20:55:54.845395850Z" level=info msg="connecting to shim 2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830" address="unix:///run/containerd/s/cb2ad7b3b6e1ce488c599b31b8b5ecd78cb2dddb9eddba40b5ccfbd63064526d" protocol=ttrpc version=3 Oct 31 20:55:54.846529 containerd[1586]: time="2025-10-31T20:55:54.846501527Z" level=info msg="CreateContainer within sandbox \"134af1277376ac487f2182e004d4b9cb282962e0ab94aa72f397cefa1dc3e91a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9\"" Oct 31 20:55:54.846973 containerd[1586]: time="2025-10-31T20:55:54.846944632Z" level=info msg="StartContainer for \"1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9\"" Oct 31 20:55:54.849125 containerd[1586]: time="2025-10-31T20:55:54.848359368Z" level=info msg="CreateContainer within sandbox \"5ca1f96241c343db00dfe329142eec410ff59ca7b12c7e29d69322da14dbab68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45\"" Oct 31 20:55:54.849125 containerd[1586]: time="2025-10-31T20:55:54.848763349Z" level=info msg="connecting to shim 1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9" address="unix:///run/containerd/s/f546ba5b299f09d91f8539bc6a32a508e14b88e693aeea1000ba037ed6b24006" protocol=ttrpc version=3 Oct 31 20:55:54.849327 containerd[1586]: time="2025-10-31T20:55:54.849303180Z" level=info msg="StartContainer for \"24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45\"" Oct 31 20:55:54.850361 containerd[1586]: time="2025-10-31T20:55:54.850325880Z" level=info msg="connecting to shim 24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45" address="unix:///run/containerd/s/821b68b02949babd2354d958864121cb3e8ac6fdd56aa2928bc70d80c1997532" protocol=ttrpc version=3 Oct 31 20:55:54.868259 systemd[1]: Started cri-containerd-2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830.scope - libcontainer container 2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830. Oct 31 20:55:54.872554 systemd[1]: Started cri-containerd-1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9.scope - libcontainer container 1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9. Oct 31 20:55:54.873775 systemd[1]: Started cri-containerd-24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45.scope - libcontainer container 24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45. 
Oct 31 20:55:54.903694 kubelet[2372]: I1031 20:55:54.903664 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 20:55:54.904206 kubelet[2372]: E1031 20:55:54.904180 2372 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 31 20:55:54.929116 containerd[1586]: time="2025-10-31T20:55:54.928912016Z" level=info msg="StartContainer for \"24906a805bc8a3f6e45f973a473d3e051975ac41ef7c2fb4d5fd60a076d64f45\" returns successfully" Oct 31 20:55:54.932480 containerd[1586]: time="2025-10-31T20:55:54.932439977Z" level=info msg="StartContainer for \"2bc94e1cce7f476fb99166225e446cf1c69a344201d25cc98f2a8d25ce210830\" returns successfully" Oct 31 20:55:54.935325 containerd[1586]: time="2025-10-31T20:55:54.935295384Z" level=info msg="StartContainer for \"1743baef1a70828e91d784c1e6a643d7f762250d6267b4fd6203fff1c783fcf9\" returns successfully" Oct 31 20:55:55.010212 kubelet[2372]: W1031 20:55:55.010147 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 31 20:55:55.010316 kubelet[2372]: E1031 20:55:55.010226 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 31 20:55:55.074208 kubelet[2372]: E1031 20:55:55.074180 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:55.074324 kubelet[2372]: E1031 20:55:55.074305 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:55.075701 kubelet[2372]: E1031 20:55:55.075327 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:55.075701 kubelet[2372]: E1031 20:55:55.075472 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:55.078349 kubelet[2372]: E1031 20:55:55.078328 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:55.078449 kubelet[2372]: E1031 20:55:55.078433 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:55.706310 kubelet[2372]: I1031 20:55:55.706280 2372 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 20:55:56.079152 kubelet[2372]: E1031 20:55:56.079117 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:56.079256 kubelet[2372]: E1031 20:55:56.079233 2372 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:56.079371 kubelet[2372]: E1031 20:55:56.079355 2372 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 20:55:56.079459 kubelet[2372]: E1031 20:55:56.079445 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:56.956881 kubelet[2372]: E1031 20:55:56.956838 2372 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 20:55:56.988800 kubelet[2372]: I1031 20:55:56.988766 2372 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 20:55:57.040790 kubelet[2372]: I1031 20:55:57.040752 2372 apiserver.go:52] "Watching apiserver" Oct 31 20:55:57.053684 kubelet[2372]: I1031 20:55:57.053648 2372 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:57.053824 kubelet[2372]: I1031 20:55:57.053654 2372 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 20:55:57.111105 kubelet[2372]: E1031 20:55:57.111045 2372 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:57.111105 kubelet[2372]: I1031 20:55:57.111077 2372 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 20:55:57.115384 kubelet[2372]: E1031 20:55:57.115335 2372 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 20:55:57.115384 kubelet[2372]: I1031 20:55:57.115372 2372 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:57.117891 kubelet[2372]: E1031 20:55:57.117848 2372 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:58.992507 systemd[1]: Reload requested from client PID 2646 ('systemctl') (unit session-7.scope)... Oct 31 20:55:58.992527 systemd[1]: Reloading... Oct 31 20:55:59.055150 zram_generator::config[2693]: No configuration found. Oct 31 20:55:59.240208 systemd[1]: Reloading finished in 247 ms. Oct 31 20:55:59.268347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 20:55:59.280970 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 20:55:59.283154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 20:55:59.283218 systemd[1]: kubelet.service: Consumed 1.365s CPU time, 128.1M memory peak. Oct 31 20:55:59.285013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 20:55:59.432371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 31 20:55:59.443472 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 20:55:59.490041 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 20:55:59.490041 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 20:55:59.490041 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 20:55:59.490473 kubelet[2732]: I1031 20:55:59.490082 2732 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 20:55:59.495822 kubelet[2732]: I1031 20:55:59.495769 2732 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 20:55:59.495822 kubelet[2732]: I1031 20:55:59.495813 2732 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 20:55:59.496114 kubelet[2732]: I1031 20:55:59.496080 2732 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 20:55:59.497401 kubelet[2732]: I1031 20:55:59.497382 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 20:55:59.499750 kubelet[2732]: I1031 20:55:59.499689 2732 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 20:55:59.504497 kubelet[2732]: I1031 20:55:59.504471 2732 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 20:55:59.507051 kubelet[2732]: I1031 20:55:59.507027 2732 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 20:55:59.507295 kubelet[2732]: I1031 20:55:59.507266 2732 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 20:55:59.507495 kubelet[2732]: I1031 20:55:59.507298 2732 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 20:55:59.507571 kubelet[2732]: I1031 20:55:59.507502 2732 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 20:55:59.507571 kubelet[2732]: I1031 20:55:59.507511 2732 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 20:55:59.507571 kubelet[2732]: I1031 20:55:59.507551 2732 state_mem.go:36] "Initialized new in-memory state store" Oct 31 20:55:59.507687 kubelet[2732]: I1031 20:55:59.507675 2732 kubelet.go:446] "Attempting to sync node with API server" Oct 31 20:55:59.507712 kubelet[2732]: I1031 20:55:59.507690 2732 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 20:55:59.507712 kubelet[2732]: I1031 20:55:59.507712 2732 kubelet.go:352] "Adding apiserver pod source" Oct 31 20:55:59.507753 kubelet[2732]: I1031 20:55:59.507721 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 20:55:59.512248 kubelet[2732]: I1031 20:55:59.512222 2732 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Oct 31 20:55:59.514453 kubelet[2732]: I1031 20:55:59.514423 2732 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 20:55:59.514936 kubelet[2732]: I1031 20:55:59.514909 2732 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 20:55:59.514990 kubelet[2732]: I1031 20:55:59.514949 2732 server.go:1287] "Started kubelet" Oct 31 20:55:59.515181 kubelet[2732]: I1031 20:55:59.515151 2732 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 20:55:59.516509 kubelet[2732]: I1031 20:55:59.516484 2732 server.go:479] "Adding debug 
handlers to kubelet server" Oct 31 20:55:59.517229 kubelet[2732]: I1031 20:55:59.517199 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 20:55:59.517579 kubelet[2732]: I1031 20:55:59.515205 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 20:55:59.517837 kubelet[2732]: I1031 20:55:59.517808 2732 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 20:55:59.518190 kubelet[2732]: I1031 20:55:59.518167 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 20:55:59.521766 kubelet[2732]: I1031 20:55:59.521009 2732 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 20:55:59.521766 kubelet[2732]: E1031 20:55:59.521379 2732 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 20:55:59.523058 kubelet[2732]: I1031 20:55:59.522879 2732 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 20:55:59.523058 kubelet[2732]: I1031 20:55:59.523025 2732 reconciler.go:26] "Reconciler: start to sync state" Oct 31 20:55:59.530234 kubelet[2732]: I1031 20:55:59.530185 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 20:55:59.532192 kubelet[2732]: I1031 20:55:59.532160 2732 factory.go:221] Registration of the systemd container factory successfully Oct 31 20:55:59.532811 kubelet[2732]: I1031 20:55:59.532773 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 20:55:59.532871 kubelet[2732]: I1031 20:55:59.532832 2732 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 20:55:59.532871 kubelet[2732]: I1031 20:55:59.532862 2732 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 20:55:59.532871 kubelet[2732]: I1031 20:55:59.532870 2732 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 20:55:59.533323 kubelet[2732]: E1031 20:55:59.532930 2732 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 20:55:59.533762 kubelet[2732]: I1031 20:55:59.533472 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 20:55:59.536397 kubelet[2732]: I1031 20:55:59.536368 2732 factory.go:221] Registration of the containerd container factory successfully Oct 31 20:55:59.576796 kubelet[2732]: I1031 20:55:59.576769 2732 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 20:55:59.576952 kubelet[2732]: I1031 20:55:59.576937 2732 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 20:55:59.577013 kubelet[2732]: I1031 20:55:59.577004 2732 state_mem.go:36] "Initialized new in-memory state store" Oct 31 20:55:59.577249 kubelet[2732]: I1031 20:55:59.577232 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 20:55:59.577330 kubelet[2732]: I1031 20:55:59.577308 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 20:55:59.577376 kubelet[2732]: I1031 20:55:59.577369 2732 policy_none.go:49] "None policy: Start" Oct 31 20:55:59.577424 kubelet[2732]: I1031 20:55:59.577416 2732 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 20:55:59.577482 kubelet[2732]: I1031 20:55:59.577473 2732 state_mem.go:35] "Initializing new in-memory state store" Oct 31 20:55:59.577633 kubelet[2732]: I1031 20:55:59.577622 2732 state_mem.go:75] "Updated machine memory state" Oct 31 20:55:59.581911 kubelet[2732]: I1031 20:55:59.581884 2732 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 20:55:59.582096 kubelet[2732]: I1031 20:55:59.582063 2732 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 20:55:59.582130 kubelet[2732]: I1031 20:55:59.582081 2732 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 20:55:59.582315 kubelet[2732]: I1031 20:55:59.582296 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 20:55:59.584238 kubelet[2732]: E1031 20:55:59.584209 2732 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 20:55:59.634557 kubelet[2732]: I1031 20:55:59.634514 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 20:55:59.634685 kubelet[2732]: I1031 20:55:59.634640 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:59.635539 kubelet[2732]: I1031 20:55:59.634860 2732 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:59.686560 kubelet[2732]: I1031 20:55:59.686530 2732 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 20:55:59.694198 kubelet[2732]: I1031 20:55:59.694143 2732 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 20:55:59.694322 kubelet[2732]: I1031 20:55:59.694256 2732 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 20:55:59.724487 kubelet[2732]: I1031 20:55:59.724445 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 20:55:59.724687 kubelet[2732]: I1031 20:55:59.724554 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e723b0d91a955e221ef9de3249cead2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e723b0d91a955e221ef9de3249cead2\") " pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:59.724825 kubelet[2732]: I1031 20:55:59.724585 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e723b0d91a955e221ef9de3249cead2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8e723b0d91a955e221ef9de3249cead2\") " pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:59.724825 kubelet[2732]: I1031 20:55:59.724778 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:59.724825 kubelet[2732]: I1031 20:55:59.724797 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:59.725013 kubelet[2732]: I1031 20:55:59.724812 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:59.725013 kubelet[2732]: I1031 20:55:59.724982 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8e723b0d91a955e221ef9de3249cead2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8e723b0d91a955e221ef9de3249cead2\") " pod="kube-system/kube-apiserver-localhost" Oct 31 20:55:59.725139 kubelet[2732]: I1031 20:55:59.724998 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:59.725273 kubelet[2732]: I1031 20:55:59.725223 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 20:55:59.940535 kubelet[2732]: E1031 20:55:59.940488 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:59.940535 kubelet[2732]: E1031 20:55:59.940516 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:55:59.940802 kubelet[2732]: E1031 20:55:59.940499 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:00.508197 kubelet[2732]: I1031 20:56:00.508143 2732 apiserver.go:52] "Watching apiserver" Oct 31 20:56:00.524125 kubelet[2732]: I1031 20:56:00.523402 2732 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 20:56:00.560342 kubelet[2732]: E1031 20:56:00.560312 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:00.560499 kubelet[2732]: E1031 20:56:00.560474 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:00.560555 kubelet[2732]: E1031 20:56:00.560503 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:00.615898 kubelet[2732]: I1031 20:56:00.615831 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.615812924 podStartE2EDuration="1.615812924s" podCreationTimestamp="2025-10-31 20:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 20:56:00.60356206 +0000 UTC m=+1.156860316" watchObservedRunningTime="2025-10-31 20:56:00.615812924 +0000 UTC m=+1.169111180" Oct 31 20:56:00.624466 kubelet[2732]: I1031 20:56:00.624411 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.624393352 podStartE2EDuration="1.624393352s" podCreationTimestamp="2025-10-31 
20:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 20:56:00.623847962 +0000 UTC m=+1.177146218" watchObservedRunningTime="2025-10-31 20:56:00.624393352 +0000 UTC m=+1.177691608" Oct 31 20:56:00.624612 kubelet[2732]: I1031 20:56:00.624563 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.62455763 podStartE2EDuration="1.62455763s" podCreationTimestamp="2025-10-31 20:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 20:56:00.616060523 +0000 UTC m=+1.169358819" watchObservedRunningTime="2025-10-31 20:56:00.62455763 +0000 UTC m=+1.177855886" Oct 31 20:56:01.562118 kubelet[2732]: E1031 20:56:01.561973 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:01.562672 kubelet[2732]: E1031 20:56:01.562598 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:02.564330 kubelet[2732]: E1031 20:56:02.564300 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:03.694312 kubelet[2732]: I1031 20:56:03.694278 2732 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 20:56:03.695353 containerd[1586]: time="2025-10-31T20:56:03.694881909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 20:56:03.695613 kubelet[2732]: I1031 20:56:03.695507 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 20:56:04.740695 systemd[1]: Created slice kubepods-besteffort-pod8b61b10a_a445_4cc8_9794_aa92970799a6.slice - libcontainer container kubepods-besteffort-pod8b61b10a_a445_4cc8_9794_aa92970799a6.slice. 
Oct 31 20:56:04.757015 kubelet[2732]: I1031 20:56:04.756919 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b61b10a-a445-4cc8-9794-aa92970799a6-xtables-lock\") pod \"kube-proxy-pj6st\" (UID: \"8b61b10a-a445-4cc8-9794-aa92970799a6\") " pod="kube-system/kube-proxy-pj6st" Oct 31 20:56:04.757015 kubelet[2732]: I1031 20:56:04.756954 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b61b10a-a445-4cc8-9794-aa92970799a6-lib-modules\") pod \"kube-proxy-pj6st\" (UID: \"8b61b10a-a445-4cc8-9794-aa92970799a6\") " pod="kube-system/kube-proxy-pj6st" Oct 31 20:56:04.757015 kubelet[2732]: I1031 20:56:04.756974 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b61b10a-a445-4cc8-9794-aa92970799a6-kube-proxy\") pod \"kube-proxy-pj6st\" (UID: \"8b61b10a-a445-4cc8-9794-aa92970799a6\") " pod="kube-system/kube-proxy-pj6st" Oct 31 20:56:04.757015 kubelet[2732]: I1031 20:56:04.756991 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jk9z\" (UniqueName: \"kubernetes.io/projected/8b61b10a-a445-4cc8-9794-aa92970799a6-kube-api-access-8jk9z\") pod \"kube-proxy-pj6st\" (UID: \"8b61b10a-a445-4cc8-9794-aa92970799a6\") " pod="kube-system/kube-proxy-pj6st" Oct 31 20:56:04.865547 systemd[1]: Created slice kubepods-besteffort-poddbaef4b5_40d3_414c_9c5d_be3b94ad50c5.slice - libcontainer container kubepods-besteffort-poddbaef4b5_40d3_414c_9c5d_be3b94ad50c5.slice. Oct 31 20:56:04.958957 kubelet[2732]: I1031 20:56:04.958910 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dbaef4b5-40d3-414c-9c5d-be3b94ad50c5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p4lnh\" (UID: \"dbaef4b5-40d3-414c-9c5d-be3b94ad50c5\") " pod="tigera-operator/tigera-operator-7dcd859c48-p4lnh" Oct 31 20:56:04.958957 kubelet[2732]: I1031 20:56:04.958952 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r568s\" (UniqueName: \"kubernetes.io/projected/dbaef4b5-40d3-414c-9c5d-be3b94ad50c5-kube-api-access-r568s\") pod \"tigera-operator-7dcd859c48-p4lnh\" (UID: \"dbaef4b5-40d3-414c-9c5d-be3b94ad50c5\") " pod="tigera-operator/tigera-operator-7dcd859c48-p4lnh" Oct 31 20:56:05.050644 kubelet[2732]: E1031 20:56:05.050599 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:05.051315 containerd[1586]: time="2025-10-31T20:56:05.051281094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pj6st,Uid:8b61b10a-a445-4cc8-9794-aa92970799a6,Namespace:kube-system,Attempt:0,}" Oct 31 20:56:05.073243 containerd[1586]: time="2025-10-31T20:56:05.073203338Z" level=info msg="connecting to shim 398de8c0dced49c2f3265b83ad4f672d74e7f8d0062e034328030313053f6337" address="unix:///run/containerd/s/f66361546428cbbf0ec01162f4e9d536cb1f80a98561a0debc59d09a16838b5a" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:05.092307 systemd[1]: Started cri-containerd-398de8c0dced49c2f3265b83ad4f672d74e7f8d0062e034328030313053f6337.scope - libcontainer container 
398de8c0dced49c2f3265b83ad4f672d74e7f8d0062e034328030313053f6337. Oct 31 20:56:05.114489 containerd[1586]: time="2025-10-31T20:56:05.114449099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pj6st,Uid:8b61b10a-a445-4cc8-9794-aa92970799a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"398de8c0dced49c2f3265b83ad4f672d74e7f8d0062e034328030313053f6337\"" Oct 31 20:56:05.115385 kubelet[2732]: E1031 20:56:05.115332 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:05.118347 containerd[1586]: time="2025-10-31T20:56:05.118316717Z" level=info msg="CreateContainer within sandbox \"398de8c0dced49c2f3265b83ad4f672d74e7f8d0062e034328030313053f6337\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 20:56:05.137842 containerd[1586]: time="2025-10-31T20:56:05.137200693Z" level=info msg="Container 135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:05.146010 containerd[1586]: time="2025-10-31T20:56:05.145894296Z" level=info msg="CreateContainer within sandbox \"398de8c0dced49c2f3265b83ad4f672d74e7f8d0062e034328030313053f6337\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a\"" Oct 31 20:56:05.146631 containerd[1586]: time="2025-10-31T20:56:05.146593790Z" level=info msg="StartContainer for \"135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a\"" Oct 31 20:56:05.148157 containerd[1586]: time="2025-10-31T20:56:05.148130682Z" level=info msg="connecting to shim 135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a" address="unix:///run/containerd/s/f66361546428cbbf0ec01162f4e9d536cb1f80a98561a0debc59d09a16838b5a" protocol=ttrpc version=3 Oct 31 20:56:05.166268 systemd[1]: Started cri-containerd-135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a.scope - libcontainer container 135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a. Oct 31 20:56:05.169296 containerd[1586]: time="2025-10-31T20:56:05.169256170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p4lnh,Uid:dbaef4b5-40d3-414c-9c5d-be3b94ad50c5,Namespace:tigera-operator,Attempt:0,}" Oct 31 20:56:05.192355 containerd[1586]: time="2025-10-31T20:56:05.192273803Z" level=info msg="connecting to shim fa851d9c0a1b98493b430e677b4fd3e77ee36f4b7ec51b59a8e713b345af4f0e" address="unix:///run/containerd/s/c59252c1061194347325f07955c437422aaaf7d9ab428d2c25c035e28459d442" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:05.210544 containerd[1586]: time="2025-10-31T20:56:05.210433779Z" level=info msg="StartContainer for \"135ec366db32ae7bebc089b193c8430352542d958a9ae5f9b01b786901c52d0a\" returns successfully" Oct 31 20:56:05.232292 systemd[1]: Started cri-containerd-fa851d9c0a1b98493b430e677b4fd3e77ee36f4b7ec51b59a8e713b345af4f0e.scope - libcontainer container fa851d9c0a1b98493b430e677b4fd3e77ee36f4b7ec51b59a8e713b345af4f0e. 
Oct 31 20:56:05.268480 containerd[1586]: time="2025-10-31T20:56:05.268424832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p4lnh,Uid:dbaef4b5-40d3-414c-9c5d-be3b94ad50c5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fa851d9c0a1b98493b430e677b4fd3e77ee36f4b7ec51b59a8e713b345af4f0e\"" Oct 31 20:56:05.272264 containerd[1586]: time="2025-10-31T20:56:05.271692541Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 20:56:05.573414 kubelet[2732]: E1031 20:56:05.573379 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:05.873926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980204000.mount: Deactivated successfully. Oct 31 20:56:07.943606 kubelet[2732]: E1031 20:56:07.943555 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:07.958910 kubelet[2732]: I1031 20:56:07.958859 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pj6st" podStartSLOduration=3.958831884 podStartE2EDuration="3.958831884s" podCreationTimestamp="2025-10-31 20:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 20:56:05.584709241 +0000 UTC m=+6.138007537" watchObservedRunningTime="2025-10-31 20:56:07.958831884 +0000 UTC m=+8.512130140" Oct 31 20:56:08.325590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795087213.mount: Deactivated successfully. Oct 31 20:56:08.578514 kubelet[2732]: E1031 20:56:08.578285 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:08.636906 containerd[1586]: time="2025-10-31T20:56:08.636850862Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:08.637529 containerd[1586]: time="2025-10-31T20:56:08.637483609Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Oct 31 20:56:08.638293 containerd[1586]: time="2025-10-31T20:56:08.638228693Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:08.649262 containerd[1586]: time="2025-10-31T20:56:08.649226644Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:08.650076 containerd[1586]: time="2025-10-31T20:56:08.650043070Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.377675223s" Oct 31 20:56:08.650188 containerd[1586]: time="2025-10-31T20:56:08.650172582Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference 
\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 31 20:56:08.654175 containerd[1586]: time="2025-10-31T20:56:08.654143096Z" level=info msg="CreateContainer within sandbox \"fa851d9c0a1b98493b430e677b4fd3e77ee36f4b7ec51b59a8e713b345af4f0e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 20:56:08.662123 containerd[1586]: time="2025-10-31T20:56:08.662078799Z" level=info msg="Container 92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:08.685483 containerd[1586]: time="2025-10-31T20:56:08.685422306Z" level=info msg="CreateContainer within sandbox \"fa851d9c0a1b98493b430e677b4fd3e77ee36f4b7ec51b59a8e713b345af4f0e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08\"" Oct 31 20:56:08.685963 containerd[1586]: time="2025-10-31T20:56:08.685938152Z" level=info msg="StartContainer for \"92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08\"" Oct 31 20:56:08.687050 containerd[1586]: time="2025-10-31T20:56:08.687019967Z" level=info msg="connecting to shim 92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08" address="unix:///run/containerd/s/c59252c1061194347325f07955c437422aaaf7d9ab428d2c25c035e28459d442" protocol=ttrpc version=3 Oct 31 20:56:08.730259 systemd[1]: Started cri-containerd-92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08.scope - libcontainer container 92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08. Oct 31 20:56:08.756683 containerd[1586]: time="2025-10-31T20:56:08.756577080Z" level=info msg="StartContainer for \"92583df564d7f3f6dcd5ddec2788dd6dd2340669ad850f9e0942c5fc4f13cf08\" returns successfully" Oct 31 20:56:09.568567 kubelet[2732]: E1031 20:56:09.568525 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:09.584106 kubelet[2732]: E1031 20:56:09.583574 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:09.608228 kubelet[2732]: I1031 20:56:09.608054 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p4lnh" podStartSLOduration=2.226290472 podStartE2EDuration="5.608039627s" podCreationTimestamp="2025-10-31 20:56:04 +0000 UTC" firstStartedPulling="2025-10-31 20:56:05.270143635 +0000 UTC m=+5.823441891" lastFinishedPulling="2025-10-31 20:56:08.65189279 +0000 UTC m=+9.205191046" observedRunningTime="2025-10-31 20:56:09.608017849 +0000 UTC m=+10.161316065" watchObservedRunningTime="2025-10-31 20:56:09.608039627 +0000 UTC m=+10.161337883" Oct 31 20:56:10.584855 kubelet[2732]: E1031 20:56:10.584819 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:11.982653 kubelet[2732]: E1031 20:56:11.982612 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:12.588389 kubelet[2732]: E1031 20:56:12.588352 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:13.842120 sudo[1808]: pam_unix(sudo:session): session closed for user root Oct 31 20:56:13.845144 sshd[1807]: Connection closed by 10.0.0.1 port 32962 Oct 31 20:56:13.844684 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:13.848962 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:32962.service: Deactivated successfully. Oct 31 20:56:13.852884 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 20:56:13.853073 systemd[1]: session-7.scope: Consumed 6.369s CPU time, 213.1M memory peak. Oct 31 20:56:13.854285 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Oct 31 20:56:13.857268 systemd-logind[1572]: Removed session 7. Oct 31 20:56:16.545273 update_engine[1574]: I20251031 20:56:16.545210 1574 update_attempter.cc:509] Updating boot flags... Oct 31 20:56:21.458389 systemd[1]: Created slice kubepods-besteffort-podc766a8a1_14bb_49e7_a059_5e8429d28f17.slice - libcontainer container kubepods-besteffort-podc766a8a1_14bb_49e7_a059_5e8429d28f17.slice. Oct 31 20:56:21.570487 kubelet[2732]: I1031 20:56:21.570375 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c766a8a1-14bb-49e7-a059-5e8429d28f17-tigera-ca-bundle\") pod \"calico-typha-7c864b8f56-55x8l\" (UID: \"c766a8a1-14bb-49e7-a059-5e8429d28f17\") " pod="calico-system/calico-typha-7c864b8f56-55x8l" Oct 31 20:56:21.570487 kubelet[2732]: I1031 20:56:21.570419 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdfzn\" (UniqueName: \"kubernetes.io/projected/c766a8a1-14bb-49e7-a059-5e8429d28f17-kube-api-access-sdfzn\") pod \"calico-typha-7c864b8f56-55x8l\" (UID: \"c766a8a1-14bb-49e7-a059-5e8429d28f17\") " pod="calico-system/calico-typha-7c864b8f56-55x8l" Oct 31 20:56:21.570487 kubelet[2732]: I1031 20:56:21.570439 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c766a8a1-14bb-49e7-a059-5e8429d28f17-typha-certs\") pod \"calico-typha-7c864b8f56-55x8l\" (UID: \"c766a8a1-14bb-49e7-a059-5e8429d28f17\") " pod="calico-system/calico-typha-7c864b8f56-55x8l" Oct 31 20:56:21.631079 systemd[1]: Created slice kubepods-besteffort-pod35b9b2bc_1b0f_46b6_b147_eb691873e641.slice - libcontainer container kubepods-besteffort-pod35b9b2bc_1b0f_46b6_b147_eb691873e641.slice. 
Oct 31 20:56:21.671294 kubelet[2732]: I1031 20:56:21.671260 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-cni-log-dir\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671294 kubelet[2732]: I1031 20:56:21.671296 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-cni-net-dir\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671440 kubelet[2732]: I1031 20:56:21.671321 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-cni-bin-dir\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671440 kubelet[2732]: I1031 20:56:21.671337 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/35b9b2bc-1b0f-46b6-b147-eb691873e641-node-certs\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671440 kubelet[2732]: I1031 20:56:21.671352 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-policysync\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671440 kubelet[2732]: I1031 20:56:21.671366 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35b9b2bc-1b0f-46b6-b147-eb691873e641-tigera-ca-bundle\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671440 kubelet[2732]: I1031 20:56:21.671381 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-xtables-lock\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671553 kubelet[2732]: I1031 20:56:21.671394 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfszx\" (UniqueName: \"kubernetes.io/projected/35b9b2bc-1b0f-46b6-b147-eb691873e641-kube-api-access-vfszx\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671553 kubelet[2732]: I1031 20:56:21.671425 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-lib-modules\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671553 kubelet[2732]: I1031 20:56:21.671439 2732 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-var-run-calico\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671553 kubelet[2732]: I1031 20:56:21.671455 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-var-lib-calico\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.671553 kubelet[2732]: I1031 20:56:21.671474 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/35b9b2bc-1b0f-46b6-b147-eb691873e641-flexvol-driver-host\") pod \"calico-node-4fhwh\" (UID: \"35b9b2bc-1b0f-46b6-b147-eb691873e641\") " pod="calico-system/calico-node-4fhwh" Oct 31 20:56:21.762196 kubelet[2732]: E1031 20:56:21.762060 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:21.763651 containerd[1586]: time="2025-10-31T20:56:21.763602359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c864b8f56-55x8l,Uid:c766a8a1-14bb-49e7-a059-5e8429d28f17,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:21.773872 kubelet[2732]: E1031 20:56:21.773715 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.773872 kubelet[2732]: W1031 20:56:21.773738 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.774737 kubelet[2732]: E1031 20:56:21.774692 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.774737 kubelet[2732]: W1031 20:56:21.774710 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.777222 kubelet[2732]: E1031 20:56:21.777170 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.778436 kubelet[2732]: E1031 20:56:21.778024 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.778436 kubelet[2732]: E1031 20:56:21.778203 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.778436 kubelet[2732]: W1031 20:56:21.778215 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.778436 kubelet[2732]: E1031 20:56:21.778229 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.788041 kubelet[2732]: E1031 20:56:21.786852 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.788041 kubelet[2732]: W1031 20:56:21.786873 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.788041 kubelet[2732]: E1031 20:56:21.786890 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.819677 containerd[1586]: time="2025-10-31T20:56:21.819181740Z" level=info msg="connecting to shim 5e41856c5a13af041377b3c1496649df6eb7b7296cbc4ef32bb49205cb71b8ed" address="unix:///run/containerd/s/80ce82b5b5487f5647dc38ad8a89d6b003aea315ad63ab4ccab4dca2a25bc79f" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:21.833297 kubelet[2732]: E1031 20:56:21.833250 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:21.847904 kubelet[2732]: E1031 20:56:21.847759 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.847904 kubelet[2732]: W1031 20:56:21.847786 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.854386 systemd[1]: Started cri-containerd-5e41856c5a13af041377b3c1496649df6eb7b7296cbc4ef32bb49205cb71b8ed.scope - libcontainer container 5e41856c5a13af041377b3c1496649df6eb7b7296cbc4ef32bb49205cb71b8ed. Oct 31 20:56:21.855587 kubelet[2732]: E1031 20:56:21.855404 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.855714 kubelet[2732]: E1031 20:56:21.855699 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.855810 kubelet[2732]: W1031 20:56:21.855767 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.856215 kubelet[2732]: E1031 20:56:21.856195 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.856490 kubelet[2732]: E1031 20:56:21.856475 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.856566 kubelet[2732]: W1031 20:56:21.856553 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.856621 kubelet[2732]: E1031 20:56:21.856611 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.857436 kubelet[2732]: E1031 20:56:21.857258 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.857655 kubelet[2732]: W1031 20:56:21.857622 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.857740 kubelet[2732]: E1031 20:56:21.857719 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.858116 kubelet[2732]: E1031 20:56:21.858083 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.858295 kubelet[2732]: W1031 20:56:21.858179 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.858295 kubelet[2732]: E1031 20:56:21.858198 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.858504 kubelet[2732]: E1031 20:56:21.858488 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.858567 kubelet[2732]: W1031 20:56:21.858555 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.858642 kubelet[2732]: E1031 20:56:21.858620 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.858939 kubelet[2732]: E1031 20:56:21.858921 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.859123 kubelet[2732]: W1031 20:56:21.859004 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.859123 kubelet[2732]: E1031 20:56:21.859023 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.859383 kubelet[2732]: E1031 20:56:21.859368 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.859492 kubelet[2732]: W1031 20:56:21.859478 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.859555 kubelet[2732]: E1031 20:56:21.859544 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.859844 kubelet[2732]: E1031 20:56:21.859810 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.860125 kubelet[2732]: W1031 20:56:21.859951 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.860125 kubelet[2732]: E1031 20:56:21.859972 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.860432 kubelet[2732]: E1031 20:56:21.860374 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.860640 kubelet[2732]: W1031 20:56:21.860594 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.860904 kubelet[2732]: E1031 20:56:21.860885 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.861539 kubelet[2732]: E1031 20:56:21.861523 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.861614 kubelet[2732]: W1031 20:56:21.861602 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.862056 kubelet[2732]: E1031 20:56:21.861665 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.862494 kubelet[2732]: E1031 20:56:21.862391 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.862494 kubelet[2732]: W1031 20:56:21.862404 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.862494 kubelet[2732]: E1031 20:56:21.862415 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.862642 kubelet[2732]: E1031 20:56:21.862630 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.862746 kubelet[2732]: W1031 20:56:21.862733 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.862848 kubelet[2732]: E1031 20:56:21.862791 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.863202 kubelet[2732]: E1031 20:56:21.863184 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.863366 kubelet[2732]: W1031 20:56:21.863352 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.863494 kubelet[2732]: E1031 20:56:21.863424 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.863761 kubelet[2732]: E1031 20:56:21.863747 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.863825 kubelet[2732]: W1031 20:56:21.863814 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.863884 kubelet[2732]: E1031 20:56:21.863873 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.864148 kubelet[2732]: E1031 20:56:21.864134 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.864206 kubelet[2732]: W1031 20:56:21.864195 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.864266 kubelet[2732]: E1031 20:56:21.864255 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.864543 kubelet[2732]: E1031 20:56:21.864529 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.864615 kubelet[2732]: W1031 20:56:21.864604 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.864670 kubelet[2732]: E1031 20:56:21.864660 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.864888 kubelet[2732]: E1031 20:56:21.864877 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.864958 kubelet[2732]: W1031 20:56:21.864946 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.865133 kubelet[2732]: E1031 20:56:21.865114 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.865687 kubelet[2732]: E1031 20:56:21.865670 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.865981 kubelet[2732]: W1031 20:56:21.865870 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.865981 kubelet[2732]: E1031 20:56:21.865890 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.866174 kubelet[2732]: E1031 20:56:21.866154 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.866174 kubelet[2732]: W1031 20:56:21.866173 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.866225 kubelet[2732]: E1031 20:56:21.866190 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.873013 kubelet[2732]: E1031 20:56:21.872974 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.873013 kubelet[2732]: W1031 20:56:21.872994 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.873013 kubelet[2732]: E1031 20:56:21.873014 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.873139 kubelet[2732]: I1031 20:56:21.873044 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85524551-e531-4ebd-be44-e40fd94305ba-varrun\") pod \"csi-node-driver-fzzpl\" (UID: \"85524551-e531-4ebd-be44-e40fd94305ba\") " pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:21.873265 kubelet[2732]: E1031 20:56:21.873237 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.873265 kubelet[2732]: W1031 20:56:21.873250 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.873357 kubelet[2732]: E1031 20:56:21.873268 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.873395 kubelet[2732]: I1031 20:56:21.873368 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85524551-e531-4ebd-be44-e40fd94305ba-kubelet-dir\") pod \"csi-node-driver-fzzpl\" (UID: \"85524551-e531-4ebd-be44-e40fd94305ba\") " pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:21.873429 kubelet[2732]: E1031 20:56:21.873419 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.873459 kubelet[2732]: W1031 20:56:21.873430 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.873479 kubelet[2732]: E1031 20:56:21.873467 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.873643 kubelet[2732]: E1031 20:56:21.873630 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.873643 kubelet[2732]: W1031 20:56:21.873641 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.873691 kubelet[2732]: E1031 20:56:21.873653 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.873813 kubelet[2732]: E1031 20:56:21.873800 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.873813 kubelet[2732]: W1031 20:56:21.873811 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.873867 kubelet[2732]: E1031 20:56:21.873828 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.873979 kubelet[2732]: E1031 20:56:21.873966 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.873979 kubelet[2732]: W1031 20:56:21.873977 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.874028 kubelet[2732]: E1031 20:56:21.873988 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.874244 kubelet[2732]: E1031 20:56:21.874229 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.874244 kubelet[2732]: W1031 20:56:21.874240 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.874308 kubelet[2732]: E1031 20:56:21.874248 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.874308 kubelet[2732]: I1031 20:56:21.874267 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kjzq\" (UniqueName: \"kubernetes.io/projected/85524551-e531-4ebd-be44-e40fd94305ba-kube-api-access-7kjzq\") pod \"csi-node-driver-fzzpl\" (UID: \"85524551-e531-4ebd-be44-e40fd94305ba\") " pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:21.874426 kubelet[2732]: E1031 20:56:21.874400 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.874426 kubelet[2732]: W1031 20:56:21.874422 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.874468 kubelet[2732]: E1031 20:56:21.874437 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.874468 kubelet[2732]: I1031 20:56:21.874451 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85524551-e531-4ebd-be44-e40fd94305ba-registration-dir\") pod \"csi-node-driver-fzzpl\" (UID: \"85524551-e531-4ebd-be44-e40fd94305ba\") " pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:21.874621 kubelet[2732]: E1031 20:56:21.874607 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.874621 kubelet[2732]: W1031 20:56:21.874619 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.874672 kubelet[2732]: E1031 20:56:21.874642 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.874672 kubelet[2732]: I1031 20:56:21.874656 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85524551-e531-4ebd-be44-e40fd94305ba-socket-dir\") pod \"csi-node-driver-fzzpl\" (UID: \"85524551-e531-4ebd-be44-e40fd94305ba\") " pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:21.874845 kubelet[2732]: E1031 20:56:21.874830 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.874845 kubelet[2732]: W1031 20:56:21.874843 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.874899 kubelet[2732]: E1031 20:56:21.874863 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.875009 kubelet[2732]: E1031 20:56:21.874990 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.875009 kubelet[2732]: W1031 20:56:21.875007 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.875061 kubelet[2732]: E1031 20:56:21.875024 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.875202 kubelet[2732]: E1031 20:56:21.875189 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.875202 kubelet[2732]: W1031 20:56:21.875200 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.875337 kubelet[2732]: E1031 20:56:21.875215 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.875414 kubelet[2732]: E1031 20:56:21.875402 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.875414 kubelet[2732]: W1031 20:56:21.875413 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.875462 kubelet[2732]: E1031 20:56:21.875428 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.875574 kubelet[2732]: E1031 20:56:21.875563 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.875574 kubelet[2732]: W1031 20:56:21.875573 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.875625 kubelet[2732]: E1031 20:56:21.875580 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.875737 kubelet[2732]: E1031 20:56:21.875727 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.875737 kubelet[2732]: W1031 20:56:21.875736 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.875782 kubelet[2732]: E1031 20:56:21.875744 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.893342 containerd[1586]: time="2025-10-31T20:56:21.893304334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c864b8f56-55x8l,Uid:c766a8a1-14bb-49e7-a059-5e8429d28f17,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e41856c5a13af041377b3c1496649df6eb7b7296cbc4ef32bb49205cb71b8ed\"" Oct 31 20:56:21.898879 kubelet[2732]: E1031 20:56:21.898857 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:21.904982 containerd[1586]: time="2025-10-31T20:56:21.904948327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 20:56:21.940236 kubelet[2732]: E1031 20:56:21.940197 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:21.940688 containerd[1586]: time="2025-10-31T20:56:21.940657319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4fhwh,Uid:35b9b2bc-1b0f-46b6-b147-eb691873e641,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:21.968355 containerd[1586]: time="2025-10-31T20:56:21.968236671Z" level=info msg="connecting to shim 96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e" address="unix:///run/containerd/s/bdf3512b48cd83297b2d3b3f7d346e1869401ce7eccb78f0d53d43c9b689a3cb" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.976425 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.977122 kubelet[2732]: W1031 20:56:21.976446 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.976466 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.976644 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.977122 kubelet[2732]: W1031 20:56:21.976652 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.976667 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.976913 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.977122 kubelet[2732]: W1031 20:56:21.976927 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.976945 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.977122 kubelet[2732]: E1031 20:56:21.977098 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.977393 kubelet[2732]: W1031 20:56:21.977106 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.977393 kubelet[2732]: E1031 20:56:21.977121 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.978021 kubelet[2732]: E1031 20:56:21.977453 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.978021 kubelet[2732]: W1031 20:56:21.977468 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.978021 kubelet[2732]: E1031 20:56:21.977484 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.978021 kubelet[2732]: E1031 20:56:21.977692 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.978021 kubelet[2732]: W1031 20:56:21.977699 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.978021 kubelet[2732]: E1031 20:56:21.977708 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.978021 kubelet[2732]: E1031 20:56:21.978004 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.978021 kubelet[2732]: W1031 20:56:21.978016 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.978276 kubelet[2732]: E1031 20:56:21.978032 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.978276 kubelet[2732]: E1031 20:56:21.978230 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.978276 kubelet[2732]: W1031 20:56:21.978243 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.978276 kubelet[2732]: E1031 20:56:21.978256 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.978434 kubelet[2732]: E1031 20:56:21.978402 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.978434 kubelet[2732]: W1031 20:56:21.978422 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.978434 kubelet[2732]: E1031 20:56:21.978432 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.978790 kubelet[2732]: E1031 20:56:21.978770 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.978790 kubelet[2732]: W1031 20:56:21.978787 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.978862 kubelet[2732]: E1031 20:56:21.978804 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.979284 kubelet[2732]: E1031 20:56:21.979266 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.979284 kubelet[2732]: W1031 20:56:21.979283 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.979349 kubelet[2732]: E1031 20:56:21.979300 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.979989 kubelet[2732]: E1031 20:56:21.979922 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.979989 kubelet[2732]: W1031 20:56:21.979939 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.979989 kubelet[2732]: E1031 20:56:21.979956 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.980465 kubelet[2732]: E1031 20:56:21.980446 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.980465 kubelet[2732]: W1031 20:56:21.980462 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.980579 kubelet[2732]: E1031 20:56:21.980537 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.981278 kubelet[2732]: E1031 20:56:21.981223 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.981278 kubelet[2732]: W1031 20:56:21.981241 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.981436 kubelet[2732]: E1031 20:56:21.981412 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.981844 kubelet[2732]: E1031 20:56:21.981645 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.981844 kubelet[2732]: W1031 20:56:21.981658 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.983150 kubelet[2732]: E1031 20:56:21.983081 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.983150 kubelet[2732]: W1031 20:56:21.983117 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.983250 kubelet[2732]: E1031 20:56:21.983234 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.983281 kubelet[2732]: E1031 20:56:21.983257 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.983339 kubelet[2732]: E1031 20:56:21.983324 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.983339 kubelet[2732]: W1031 20:56:21.983335 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.983512 kubelet[2732]: E1031 20:56:21.983370 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.983563 kubelet[2732]: E1031 20:56:21.983526 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.983563 kubelet[2732]: W1031 20:56:21.983534 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.983563 kubelet[2732]: E1031 20:56:21.983543 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.984245 kubelet[2732]: E1031 20:56:21.984223 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.984245 kubelet[2732]: W1031 20:56:21.984242 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.984329 kubelet[2732]: E1031 20:56:21.984261 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.985396 kubelet[2732]: E1031 20:56:21.985056 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.985396 kubelet[2732]: W1031 20:56:21.985395 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.985668 kubelet[2732]: E1031 20:56:21.985644 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.986165 kubelet[2732]: E1031 20:56:21.986147 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.986246 kubelet[2732]: W1031 20:56:21.986233 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.986488 kubelet[2732]: E1031 20:56:21.986465 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.986781 kubelet[2732]: E1031 20:56:21.986578 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.986781 kubelet[2732]: W1031 20:56:21.986588 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.986781 kubelet[2732]: E1031 20:56:21.986634 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.987242 kubelet[2732]: E1031 20:56:21.987108 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.987495 kubelet[2732]: W1031 20:56:21.987357 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.987495 kubelet[2732]: E1031 20:56:21.987394 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.987905 kubelet[2732]: E1031 20:56:21.987758 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.987905 kubelet[2732]: W1031 20:56:21.987771 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.987905 kubelet[2732]: E1031 20:56:21.987783 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.988063 kubelet[2732]: E1031 20:56:21.988042 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.988142 kubelet[2732]: W1031 20:56:21.988129 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.988195 kubelet[2732]: E1031 20:56:21.988184 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:21.992188 kubelet[2732]: E1031 20:56:21.992171 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:21.992188 kubelet[2732]: W1031 20:56:21.992186 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:21.992260 kubelet[2732]: E1031 20:56:21.992198 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:21.999536 systemd[1]: Started cri-containerd-96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e.scope - libcontainer container 96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e. Oct 31 20:56:22.036288 containerd[1586]: time="2025-10-31T20:56:22.036183416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4fhwh,Uid:35b9b2bc-1b0f-46b6-b147-eb691873e641,Namespace:calico-system,Attempt:0,} returns sandbox id \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\"" Oct 31 20:56:22.037169 kubelet[2732]: E1031 20:56:22.037139 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:22.880569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount635967504.mount: Deactivated successfully. Oct 31 20:56:23.367861 containerd[1586]: time="2025-10-31T20:56:23.367804404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:23.368697 containerd[1586]: time="2025-10-31T20:56:23.368609348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Oct 31 20:56:23.369366 containerd[1586]: time="2025-10-31T20:56:23.369338148Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:23.371238 containerd[1586]: time="2025-10-31T20:56:23.371199199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:23.372114 containerd[1586]: time="2025-10-31T20:56:23.372050639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.46693537s" Oct 31 20:56:23.372114 containerd[1586]: time="2025-10-31T20:56:23.372085291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 31 20:56:23.374716 containerd[1586]: time="2025-10-31T20:56:23.374671100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 20:56:23.392512 containerd[1586]: time="2025-10-31T20:56:23.392233072Z" level=info msg="CreateContainer within sandbox 
\"5e41856c5a13af041377b3c1496649df6eb7b7296cbc4ef32bb49205cb71b8ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 20:56:23.401541 containerd[1586]: time="2025-10-31T20:56:23.401504599Z" level=info msg="Container c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:23.408480 containerd[1586]: time="2025-10-31T20:56:23.408449081Z" level=info msg="CreateContainer within sandbox \"5e41856c5a13af041377b3c1496649df6eb7b7296cbc4ef32bb49205cb71b8ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd\"" Oct 31 20:56:23.409003 containerd[1586]: time="2025-10-31T20:56:23.408978335Z" level=info msg="StartContainer for \"c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd\"" Oct 31 20:56:23.409988 containerd[1586]: time="2025-10-31T20:56:23.409964539Z" level=info msg="connecting to shim c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd" address="unix:///run/containerd/s/80ce82b5b5487f5647dc38ad8a89d6b003aea315ad63ab4ccab4dca2a25bc79f" protocol=ttrpc version=3 Oct 31 20:56:23.434252 systemd[1]: Started cri-containerd-c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd.scope - libcontainer container c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd. Oct 31 20:56:23.487986 containerd[1586]: time="2025-10-31T20:56:23.487952528Z" level=info msg="StartContainer for \"c2476382d12dc438347d2b190bbe4caaafb531c23e2583103859510b4b3b90cd\" returns successfully" Oct 31 20:56:23.534112 kubelet[2732]: E1031 20:56:23.533848 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:23.626748 kubelet[2732]: E1031 20:56:23.626521 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:23.642927 kubelet[2732]: I1031 20:56:23.641906 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c864b8f56-55x8l" podStartSLOduration=1.168166168 podStartE2EDuration="2.641889557s" podCreationTimestamp="2025-10-31 20:56:21 +0000 UTC" firstStartedPulling="2025-10-31 20:56:21.900043934 +0000 UTC m=+22.453342150" lastFinishedPulling="2025-10-31 20:56:23.373767283 +0000 UTC m=+23.927065539" observedRunningTime="2025-10-31 20:56:23.641559769 +0000 UTC m=+24.194858025" watchObservedRunningTime="2025-10-31 20:56:23.641889557 +0000 UTC m=+24.195187813" Oct 31 20:56:23.681219 kubelet[2732]: E1031 20:56:23.681193 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.681336 kubelet[2732]: W1031 20:56:23.681319 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.681401 kubelet[2732]: E1031 20:56:23.681388 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.681727 kubelet[2732]: E1031 20:56:23.681625 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.681727 kubelet[2732]: W1031 20:56:23.681639 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.681727 kubelet[2732]: E1031 20:56:23.681650 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.681895 kubelet[2732]: E1031 20:56:23.681882 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.681951 kubelet[2732]: W1031 20:56:23.681941 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.681999 kubelet[2732]: E1031 20:56:23.681989 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.682387 kubelet[2732]: E1031 20:56:23.682374 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.682467 kubelet[2732]: W1031 20:56:23.682455 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.682602 kubelet[2732]: E1031 20:56:23.682584 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.682932 kubelet[2732]: E1031 20:56:23.682841 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.682932 kubelet[2732]: W1031 20:56:23.682853 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.682932 kubelet[2732]: E1031 20:56:23.682864 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.683074 kubelet[2732]: E1031 20:56:23.683062 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.683147 kubelet[2732]: W1031 20:56:23.683136 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.683196 kubelet[2732]: E1031 20:56:23.683186 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.683488 kubelet[2732]: E1031 20:56:23.683408 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.683488 kubelet[2732]: W1031 20:56:23.683419 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.683488 kubelet[2732]: E1031 20:56:23.683429 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.683769 kubelet[2732]: E1031 20:56:23.683750 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.683856 kubelet[2732]: W1031 20:56:23.683828 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.683915 kubelet[2732]: E1031 20:56:23.683904 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.684155 kubelet[2732]: E1031 20:56:23.684142 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.684223 kubelet[2732]: W1031 20:56:23.684210 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.684288 kubelet[2732]: E1031 20:56:23.684261 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.684578 kubelet[2732]: E1031 20:56:23.684492 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.684578 kubelet[2732]: W1031 20:56:23.684503 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.684578 kubelet[2732]: E1031 20:56:23.684512 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.684743 kubelet[2732]: E1031 20:56:23.684731 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.684797 kubelet[2732]: W1031 20:56:23.684787 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.684843 kubelet[2732]: E1031 20:56:23.684834 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.691111 kubelet[2732]: E1031 20:56:23.691079 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.691222 kubelet[2732]: W1031 20:56:23.691190 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.691222 kubelet[2732]: E1031 20:56:23.691207 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.691552 kubelet[2732]: E1031 20:56:23.691540 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.691644 kubelet[2732]: W1031 20:56:23.691606 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.691729 kubelet[2732]: E1031 20:56:23.691700 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.692002 kubelet[2732]: E1031 20:56:23.691991 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.692070 kubelet[2732]: W1031 20:56:23.692059 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.692148 kubelet[2732]: E1031 20:56:23.692136 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.692410 kubelet[2732]: E1031 20:56:23.692340 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.692410 kubelet[2732]: W1031 20:56:23.692352 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.692410 kubelet[2732]: E1031 20:56:23.692361 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.693226 kubelet[2732]: E1031 20:56:23.693205 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.693265 kubelet[2732]: W1031 20:56:23.693231 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.693265 kubelet[2732]: E1031 20:56:23.693245 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.693692 kubelet[2732]: E1031 20:56:23.693675 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.693692 kubelet[2732]: W1031 20:56:23.693690 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.693754 kubelet[2732]: E1031 20:56:23.693706 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.695424 kubelet[2732]: E1031 20:56:23.695376 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.695424 kubelet[2732]: W1031 20:56:23.695390 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.695769 kubelet[2732]: E1031 20:56:23.695615 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.696580 kubelet[2732]: E1031 20:56:23.696541 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.696580 kubelet[2732]: W1031 20:56:23.696556 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.696860 kubelet[2732]: E1031 20:56:23.696827 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.698179 kubelet[2732]: E1031 20:56:23.698165 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.698251 kubelet[2732]: W1031 20:56:23.698239 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.698305 kubelet[2732]: E1031 20:56:23.698296 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.698606 kubelet[2732]: E1031 20:56:23.698581 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.698606 kubelet[2732]: W1031 20:56:23.698593 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.698780 kubelet[2732]: E1031 20:56:23.698690 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.698906 kubelet[2732]: E1031 20:56:23.698878 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.698968 kubelet[2732]: W1031 20:56:23.698956 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.699056 kubelet[2732]: E1031 20:56:23.699046 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.699351 kubelet[2732]: E1031 20:56:23.699319 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.699468 kubelet[2732]: W1031 20:56:23.699383 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.699962 kubelet[2732]: E1031 20:56:23.699854 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.700568 kubelet[2732]: E1031 20:56:23.700443 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.700568 kubelet[2732]: W1031 20:56:23.700457 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.700568 kubelet[2732]: E1031 20:56:23.700490 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.701336 kubelet[2732]: E1031 20:56:23.701304 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.701336 kubelet[2732]: W1031 20:56:23.701320 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.701817 kubelet[2732]: E1031 20:56:23.701699 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.702472 kubelet[2732]: E1031 20:56:23.702223 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.702570 kubelet[2732]: W1031 20:56:23.702546 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.702934 kubelet[2732]: E1031 20:56:23.702888 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.703750 kubelet[2732]: E1031 20:56:23.703365 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.703750 kubelet[2732]: W1031 20:56:23.703480 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.703750 kubelet[2732]: E1031 20:56:23.703598 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.704257 kubelet[2732]: E1031 20:56:23.704171 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.704897 kubelet[2732]: W1031 20:56:23.704873 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.705060 kubelet[2732]: E1031 20:56:23.705048 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.706128 kubelet[2732]: E1031 20:56:23.706112 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.706208 kubelet[2732]: W1031 20:56:23.706196 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.708232 kubelet[2732]: E1031 20:56:23.708205 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.709062 kubelet[2732]: E1031 20:56:23.709043 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.709310 kubelet[2732]: W1031 20:56:23.709125 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.709529 kubelet[2732]: E1031 20:56:23.709514 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.710794 kubelet[2732]: W1031 20:56:23.710318 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.710794 kubelet[2732]: E1031 20:56:23.710344 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.710794 kubelet[2732]: E1031 20:56:23.710379 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 20:56:23.724431 kubelet[2732]: E1031 20:56:23.724397 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.724431 kubelet[2732]: W1031 20:56:23.724420 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.724537 kubelet[2732]: E1031 20:56:23.724438 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:23.724688 kubelet[2732]: E1031 20:56:23.724667 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 20:56:23.724688 kubelet[2732]: W1031 20:56:23.724683 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 20:56:23.724738 kubelet[2732]: E1031 20:56:23.724702 2732 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 20:56:24.550275 containerd[1586]: time="2025-10-31T20:56:24.550195222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:24.550855 containerd[1586]: time="2025-10-31T20:56:24.550788485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:24.551728 containerd[1586]: time="2025-10-31T20:56:24.551695364Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:24.553785 containerd[1586]: time="2025-10-31T20:56:24.553759400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:24.554466 containerd[1586]: time="2025-10-31T20:56:24.554430367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.179715412s" Oct 31 20:56:24.554466 containerd[1586]: time="2025-10-31T20:56:24.554465338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 31 20:56:24.556156 containerd[1586]: time="2025-10-31T20:56:24.556128930Z" level=info msg="CreateContainer within sandbox \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 20:56:24.571318 containerd[1586]: time="2025-10-31T20:56:24.571275837Z" level=info msg="Container bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab: CDI 
devices from CRI Config.CDIDevices: []" Oct 31 20:56:24.578133 containerd[1586]: time="2025-10-31T20:56:24.578072731Z" level=info msg="CreateContainer within sandbox \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab\"" Oct 31 20:56:24.578582 containerd[1586]: time="2025-10-31T20:56:24.578553039Z" level=info msg="StartContainer for \"bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab\"" Oct 31 20:56:24.580114 containerd[1586]: time="2025-10-31T20:56:24.580072828Z" level=info msg="connecting to shim bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab" address="unix:///run/containerd/s/bdf3512b48cd83297b2d3b3f7d346e1869401ce7eccb78f0d53d43c9b689a3cb" protocol=ttrpc version=3 Oct 31 20:56:24.603267 systemd[1]: Started cri-containerd-bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab.scope - libcontainer container bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab. Oct 31 20:56:24.633434 kubelet[2732]: I1031 20:56:24.633406 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 20:56:24.634357 kubelet[2732]: E1031 20:56:24.634292 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:24.638679 containerd[1586]: time="2025-10-31T20:56:24.638642913Z" level=info msg="StartContainer for \"bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab\" returns successfully" Oct 31 20:56:24.655312 systemd[1]: cri-containerd-bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab.scope: Deactivated successfully. Oct 31 20:56:24.675976 containerd[1586]: time="2025-10-31T20:56:24.675917677Z" level=info msg="received exit event container_id:\"bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab\" id:\"bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab\" pid:3437 exited_at:{seconds:1761944184 nanos:668595821}" Oct 31 20:56:24.722601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb02e64100f7ed412035ff8dba1f5229f0592ad79e38ff53535ad96ba4587cab-rootfs.mount: Deactivated successfully. 
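The repeated kubelet `driver-call.go` / `plugins.go:695` errors above all come from the FlexVolume plugin prober failing to find an executable named `uds` under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/`. The `flexvol-driver` init container started just above (from the `pod2daemon-flexvol` image) is what normally drops that binary into the kubelet's volume plugin directory, so the errors are expected to stop once it has run. As a minimal, illustrative sketch (not kubelet code), the following Go program checks the same path taken from the log; the program name and output text are assumptions for illustration.

```go
// flexvolcheck is an illustrative sketch (not kubelet source): it checks
// whether the FlexVolume driver binary the kubelet is probing for exists
// and is executable. The path is copied from the log lines above; adjust
// it if the kubelet uses a different --volume-plugin-dir.
package main

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(driver)
	if err != nil {
		// This is the state the log shows: "executable file not found in $PATH".
		fmt.Printf("driver missing: %v (flexvol-driver init container has not installed it yet)\n", err)
		os.Exit(1)
	}
	if info.Mode()&0o111 == 0 {
		fmt.Printf("driver present but not executable: mode=%v\n", info.Mode())
		os.Exit(1)
	}
	fmt.Println("FlexVolume driver installed; the kubelet probe errors should stop on the next probe cycle")
}
```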
Oct 31 20:56:25.533441 kubelet[2732]: E1031 20:56:25.533382 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:25.637599 kubelet[2732]: E1031 20:56:25.637554 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:25.639188 kubelet[2732]: E1031 20:56:25.637610 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:25.641907 containerd[1586]: time="2025-10-31T20:56:25.641871993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 20:56:26.639536 kubelet[2732]: E1031 20:56:26.639506 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:27.533813 kubelet[2732]: E1031 20:56:27.533754 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:28.458013 containerd[1586]: time="2025-10-31T20:56:28.457961698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:28.458708 containerd[1586]: time="2025-10-31T20:56:28.458640419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Oct 31 20:56:28.459335 containerd[1586]: time="2025-10-31T20:56:28.459303097Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:28.461226 containerd[1586]: time="2025-10-31T20:56:28.461165100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:28.462175 containerd[1586]: time="2025-10-31T20:56:28.462142253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.820224887s" Oct 31 20:56:28.462175 containerd[1586]: time="2025-10-31T20:56:28.462172980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 31 20:56:28.464374 containerd[1586]: time="2025-10-31T20:56:28.464322332Z" level=info msg="CreateContainer within sandbox \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 20:56:28.477303 containerd[1586]: 
time="2025-10-31T20:56:28.477253850Z" level=info msg="Container 096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:28.484999 containerd[1586]: time="2025-10-31T20:56:28.484950042Z" level=info msg="CreateContainer within sandbox \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12\"" Oct 31 20:56:28.485460 containerd[1586]: time="2025-10-31T20:56:28.485438158Z" level=info msg="StartContainer for \"096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12\"" Oct 31 20:56:28.491597 containerd[1586]: time="2025-10-31T20:56:28.491567217Z" level=info msg="connecting to shim 096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12" address="unix:///run/containerd/s/bdf3512b48cd83297b2d3b3f7d346e1869401ce7eccb78f0d53d43c9b689a3cb" protocol=ttrpc version=3 Oct 31 20:56:28.517312 systemd[1]: Started cri-containerd-096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12.scope - libcontainer container 096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12. Oct 31 20:56:28.555128 containerd[1586]: time="2025-10-31T20:56:28.554671316Z" level=info msg="StartContainer for \"096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12\" returns successfully" Oct 31 20:56:28.647155 kubelet[2732]: E1031 20:56:28.647113 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:29.148721 systemd[1]: cri-containerd-096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12.scope: Deactivated successfully. Oct 31 20:56:29.149039 systemd[1]: cri-containerd-096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12.scope: Consumed 458ms CPU time, 172.7M memory peak, 2.5M read from disk, 165.9M written to disk. Oct 31 20:56:29.155139 containerd[1586]: time="2025-10-31T20:56:29.153673580Z" level=info msg="received exit event container_id:\"096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12\" id:\"096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12\" pid:3499 exited_at:{seconds:1761944189 nanos:151509217}" Oct 31 20:56:29.172558 kubelet[2732]: I1031 20:56:29.172197 2732 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 20:56:29.185152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-096efbe5945540802920aab6414e6da707237183407fa2157c4e4c29491c3b12-rootfs.mount: Deactivated successfully. Oct 31 20:56:29.221052 systemd[1]: Created slice kubepods-burstable-podb5fcba99_ac61_4506_9ea4_62f848c483c1.slice - libcontainer container kubepods-burstable-podb5fcba99_ac61_4506_9ea4_62f848c483c1.slice. Oct 31 20:56:29.231730 systemd[1]: Created slice kubepods-burstable-pod4809a88d_1206_463c_b992_8852b18c726f.slice - libcontainer container kubepods-burstable-pod4809a88d_1206_463c_b992_8852b18c726f.slice. Oct 31 20:56:29.235546 systemd[1]: Created slice kubepods-besteffort-podd8950dfd_888a_4512_a9c0_edda8417ecdd.slice - libcontainer container kubepods-besteffort-podd8950dfd_888a_4512_a9c0_edda8417ecdd.slice. 
Oct 31 20:56:29.237294 kubelet[2732]: I1031 20:56:29.237222 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e39d71-e729-442a-ad78-d80f8756d7da-tigera-ca-bundle\") pod \"calico-kube-controllers-bdff9fc5-g6ppj\" (UID: \"74e39d71-e729-442a-ad78-d80f8756d7da\") " pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" Oct 31 20:56:29.237612 kubelet[2732]: I1031 20:56:29.237274 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rzq\" (UniqueName: \"kubernetes.io/projected/4809a88d-1206-463c-b992-8852b18c726f-kube-api-access-58rzq\") pod \"coredns-668d6bf9bc-4qcs2\" (UID: \"4809a88d-1206-463c-b992-8852b18c726f\") " pod="kube-system/coredns-668d6bf9bc-4qcs2" Oct 31 20:56:29.237723 kubelet[2732]: I1031 20:56:29.237706 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5j8\" (UniqueName: \"kubernetes.io/projected/ef80b07d-34c2-483b-b1fb-77de41f9c304-kube-api-access-zk5j8\") pod \"goldmane-666569f655-bmt48\" (UID: \"ef80b07d-34c2-483b-b1fb-77de41f9c304\") " pod="calico-system/goldmane-666569f655-bmt48" Oct 31 20:56:29.237826 kubelet[2732]: I1031 20:56:29.237812 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2tq\" (UniqueName: \"kubernetes.io/projected/d8950dfd-888a-4512-a9c0-edda8417ecdd-kube-api-access-ts2tq\") pod \"calico-apiserver-7cfd5bcf7c-q2hch\" (UID: \"d8950dfd-888a-4512-a9c0-edda8417ecdd\") " pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" Oct 31 20:56:29.237909 kubelet[2732]: I1031 20:56:29.237894 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4809a88d-1206-463c-b992-8852b18c726f-config-volume\") pod \"coredns-668d6bf9bc-4qcs2\" (UID: \"4809a88d-1206-463c-b992-8852b18c726f\") " pod="kube-system/coredns-668d6bf9bc-4qcs2" Oct 31 20:56:29.238038 kubelet[2732]: I1031 20:56:29.237988 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvpz\" (UniqueName: \"kubernetes.io/projected/bba5ef03-9e42-43a6-ab98-a0179f6b153f-kube-api-access-lqvpz\") pod \"calico-apiserver-7cfd5bcf7c-d7m9d\" (UID: \"bba5ef03-9e42-43a6-ab98-a0179f6b153f\") " pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" Oct 31 20:56:29.238134 kubelet[2732]: I1031 20:56:29.238021 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef80b07d-34c2-483b-b1fb-77de41f9c304-goldmane-ca-bundle\") pod \"goldmane-666569f655-bmt48\" (UID: \"ef80b07d-34c2-483b-b1fb-77de41f9c304\") " pod="calico-system/goldmane-666569f655-bmt48" Oct 31 20:56:29.238301 kubelet[2732]: I1031 20:56:29.238213 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ef80b07d-34c2-483b-b1fb-77de41f9c304-goldmane-key-pair\") pod \"goldmane-666569f655-bmt48\" (UID: \"ef80b07d-34c2-483b-b1fb-77de41f9c304\") " pod="calico-system/goldmane-666569f655-bmt48" Oct 31 20:56:29.238301 kubelet[2732]: I1031 20:56:29.238246 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggh6p\" (UniqueName: 
\"kubernetes.io/projected/b5fcba99-ac61-4506-9ea4-62f848c483c1-kube-api-access-ggh6p\") pod \"coredns-668d6bf9bc-nqh7g\" (UID: \"b5fcba99-ac61-4506-9ea4-62f848c483c1\") " pod="kube-system/coredns-668d6bf9bc-nqh7g" Oct 31 20:56:29.238554 kubelet[2732]: I1031 20:56:29.238512 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8950dfd-888a-4512-a9c0-edda8417ecdd-calico-apiserver-certs\") pod \"calico-apiserver-7cfd5bcf7c-q2hch\" (UID: \"d8950dfd-888a-4512-a9c0-edda8417ecdd\") " pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" Oct 31 20:56:29.238650 kubelet[2732]: I1031 20:56:29.238634 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-backend-key-pair\") pod \"whisker-7fd7b7cc-bwfpc\" (UID: \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\") " pod="calico-system/whisker-7fd7b7cc-bwfpc" Oct 31 20:56:29.238792 kubelet[2732]: I1031 20:56:29.238717 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvs8x\" (UniqueName: \"kubernetes.io/projected/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-kube-api-access-tvs8x\") pod \"whisker-7fd7b7cc-bwfpc\" (UID: \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\") " pod="calico-system/whisker-7fd7b7cc-bwfpc" Oct 31 20:56:29.238880 kubelet[2732]: I1031 20:56:29.238865 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5fcba99-ac61-4506-9ea4-62f848c483c1-config-volume\") pod \"coredns-668d6bf9bc-nqh7g\" (UID: \"b5fcba99-ac61-4506-9ea4-62f848c483c1\") " pod="kube-system/coredns-668d6bf9bc-nqh7g" Oct 31 20:56:29.238993 kubelet[2732]: I1031 20:56:29.238976 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-ca-bundle\") pod \"whisker-7fd7b7cc-bwfpc\" (UID: \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\") " pod="calico-system/whisker-7fd7b7cc-bwfpc" Oct 31 20:56:29.239149 kubelet[2732]: I1031 20:56:29.239045 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7wh\" (UniqueName: \"kubernetes.io/projected/74e39d71-e729-442a-ad78-d80f8756d7da-kube-api-access-bb7wh\") pod \"calico-kube-controllers-bdff9fc5-g6ppj\" (UID: \"74e39d71-e729-442a-ad78-d80f8756d7da\") " pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" Oct 31 20:56:29.239252 kubelet[2732]: I1031 20:56:29.239235 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bba5ef03-9e42-43a6-ab98-a0179f6b153f-calico-apiserver-certs\") pod \"calico-apiserver-7cfd5bcf7c-d7m9d\" (UID: \"bba5ef03-9e42-43a6-ab98-a0179f6b153f\") " pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" Oct 31 20:56:29.239390 kubelet[2732]: I1031 20:56:29.239317 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef80b07d-34c2-483b-b1fb-77de41f9c304-config\") pod \"goldmane-666569f655-bmt48\" (UID: \"ef80b07d-34c2-483b-b1fb-77de41f9c304\") " pod="calico-system/goldmane-666569f655-bmt48" Oct 31 
20:56:29.242700 systemd[1]: Created slice kubepods-besteffort-pod8e2bc50f_cc6b_4d4f_8a65_c820c44acc89.slice - libcontainer container kubepods-besteffort-pod8e2bc50f_cc6b_4d4f_8a65_c820c44acc89.slice. Oct 31 20:56:29.249073 systemd[1]: Created slice kubepods-besteffort-pod74e39d71_e729_442a_ad78_d80f8756d7da.slice - libcontainer container kubepods-besteffort-pod74e39d71_e729_442a_ad78_d80f8756d7da.slice. Oct 31 20:56:29.256409 systemd[1]: Created slice kubepods-besteffort-podbba5ef03_9e42_43a6_ab98_a0179f6b153f.slice - libcontainer container kubepods-besteffort-podbba5ef03_9e42_43a6_ab98_a0179f6b153f.slice. Oct 31 20:56:29.262703 systemd[1]: Created slice kubepods-besteffort-podef80b07d_34c2_483b_b1fb_77de41f9c304.slice - libcontainer container kubepods-besteffort-podef80b07d_34c2_483b_b1fb_77de41f9c304.slice. Oct 31 20:56:29.525808 kubelet[2732]: E1031 20:56:29.525680 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:29.527714 containerd[1586]: time="2025-10-31T20:56:29.527659272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nqh7g,Uid:b5fcba99-ac61-4506-9ea4-62f848c483c1,Namespace:kube-system,Attempt:0,}" Oct 31 20:56:29.534409 kubelet[2732]: E1031 20:56:29.534356 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:29.535131 containerd[1586]: time="2025-10-31T20:56:29.535079808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qcs2,Uid:4809a88d-1206-463c-b992-8852b18c726f,Namespace:kube-system,Attempt:0,}" Oct 31 20:56:29.543463 containerd[1586]: time="2025-10-31T20:56:29.541643033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-q2hch,Uid:d8950dfd-888a-4512-a9c0-edda8417ecdd,Namespace:calico-apiserver,Attempt:0,}" Oct 31 20:56:29.547490 containerd[1586]: time="2025-10-31T20:56:29.547428204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fd7b7cc-bwfpc,Uid:8e2bc50f-cc6b-4d4f-8a65-c820c44acc89,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:29.552576 systemd[1]: Created slice kubepods-besteffort-pod85524551_e531_4ebd_be44_e40fd94305ba.slice - libcontainer container kubepods-besteffort-pod85524551_e531_4ebd_be44_e40fd94305ba.slice. 
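The RunPodSandbox attempts that follow all fail the same way: the Calico CNI plugin stats `/var/lib/calico/nodename`, a file written by the `calico/node` container once it is running with `/var/lib/calico` mounted from the host, and until that agent is up every non-host-network sandbox is rejected with "cni plugin not initialized" / "no such file or directory". As a hedged, stand-alone sketch (not the plugin's source), the Go program below checks the same precondition; the program name and output strings are assumptions.

```go
// caliconodecheck is an illustrative sketch of the precondition the
// failing RunPodSandbox calls below trip over: the Calico CNI plugin
// requires /var/lib/calico/nodename, which the calico/node container
// writes once it is running with /var/lib/calico host-mounted.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Corresponds to the log error: "stat /var/lib/calico/nodename: no such
		// file or directory: check that the calico/node container is running
		// and has mounted /var/lib/calico/".
		fmt.Printf("calico not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico node agent is up; nodename=%q\n", strings.TrimSpace(string(data)))
}
```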
Oct 31 20:56:29.555931 containerd[1586]: time="2025-10-31T20:56:29.555532732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bdff9fc5-g6ppj,Uid:74e39d71-e729-442a-ad78-d80f8756d7da,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:29.558349 containerd[1586]: time="2025-10-31T20:56:29.558307311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fzzpl,Uid:85524551-e531-4ebd-be44-e40fd94305ba,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:29.562745 containerd[1586]: time="2025-10-31T20:56:29.562702932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-d7m9d,Uid:bba5ef03-9e42-43a6-ab98-a0179f6b153f,Namespace:calico-apiserver,Attempt:0,}" Oct 31 20:56:29.569641 containerd[1586]: time="2025-10-31T20:56:29.569589989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bmt48,Uid:ef80b07d-34c2-483b-b1fb-77de41f9c304,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:29.655422 kubelet[2732]: E1031 20:56:29.655381 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:29.658555 containerd[1586]: time="2025-10-31T20:56:29.657843122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 20:56:29.675333 containerd[1586]: time="2025-10-31T20:56:29.675268330Z" level=error msg="Failed to destroy network for sandbox \"714acf69b6c37918597092dd87e747d37341028e91cc1f2d663ea0cc7c9e8c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.686478 containerd[1586]: time="2025-10-31T20:56:29.685812163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bdff9fc5-g6ppj,Uid:74e39d71-e729-442a-ad78-d80f8756d7da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"714acf69b6c37918597092dd87e747d37341028e91cc1f2d663ea0cc7c9e8c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.686643 kubelet[2732]: E1031 20:56:29.686067 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714acf69b6c37918597092dd87e747d37341028e91cc1f2d663ea0cc7c9e8c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.689356 kubelet[2732]: E1031 20:56:29.689304 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714acf69b6c37918597092dd87e747d37341028e91cc1f2d663ea0cc7c9e8c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" Oct 31 20:56:29.689656 kubelet[2732]: E1031 20:56:29.689611 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"714acf69b6c37918597092dd87e747d37341028e91cc1f2d663ea0cc7c9e8c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" Oct 31 20:56:29.689845 kubelet[2732]: E1031 20:56:29.689819 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bdff9fc5-g6ppj_calico-system(74e39d71-e729-442a-ad78-d80f8756d7da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bdff9fc5-g6ppj_calico-system(74e39d71-e729-442a-ad78-d80f8756d7da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"714acf69b6c37918597092dd87e747d37341028e91cc1f2d663ea0cc7c9e8c1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" podUID="74e39d71-e729-442a-ad78-d80f8756d7da" Oct 31 20:56:29.702315 containerd[1586]: time="2025-10-31T20:56:29.702263634Z" level=error msg="Failed to destroy network for sandbox \"8819b5485d96ec58d6733f71be8ff957cd39d19bb7848b2a0874287aee1594dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.707285 containerd[1586]: time="2025-10-31T20:56:29.707229182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nqh7g,Uid:b5fcba99-ac61-4506-9ea4-62f848c483c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8819b5485d96ec58d6733f71be8ff957cd39d19bb7848b2a0874287aee1594dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.708243 kubelet[2732]: E1031 20:56:29.707642 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8819b5485d96ec58d6733f71be8ff957cd39d19bb7848b2a0874287aee1594dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.708243 kubelet[2732]: E1031 20:56:29.707696 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8819b5485d96ec58d6733f71be8ff957cd39d19bb7848b2a0874287aee1594dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nqh7g" Oct 31 20:56:29.708243 kubelet[2732]: E1031 20:56:29.707716 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8819b5485d96ec58d6733f71be8ff957cd39d19bb7848b2a0874287aee1594dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nqh7g" Oct 31 
20:56:29.708340 kubelet[2732]: E1031 20:56:29.707756 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nqh7g_kube-system(b5fcba99-ac61-4506-9ea4-62f848c483c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nqh7g_kube-system(b5fcba99-ac61-4506-9ea4-62f848c483c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8819b5485d96ec58d6733f71be8ff957cd39d19bb7848b2a0874287aee1594dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nqh7g" podUID="b5fcba99-ac61-4506-9ea4-62f848c483c1" Oct 31 20:56:29.721773 containerd[1586]: time="2025-10-31T20:56:29.721726377Z" level=error msg="Failed to destroy network for sandbox \"8964353715bceab1d541870f75b78cd5562954e38ba2692ebfe78a37bb85fd37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.722005 containerd[1586]: time="2025-10-31T20:56:29.721875491Z" level=error msg="Failed to destroy network for sandbox \"138e0c04e9fedce58ee97c01dcfce5e6d868c573c212a29fcac1be75754c6c48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.722351 containerd[1586]: time="2025-10-31T20:56:29.722302586Z" level=error msg="Failed to destroy network for sandbox \"9e7157f8751a69b86a350bb09c481e5e02cecb45aa8f9c05463d57252ff9a4a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.722450 containerd[1586]: time="2025-10-31T20:56:29.722421652Z" level=error msg="Failed to destroy network for sandbox \"4ade1ec5abdc5fc42935d5aba45b5ba1c055e4903b885d55fcd8844fbb717f13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.725632 containerd[1586]: time="2025-10-31T20:56:29.725568755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qcs2,Uid:4809a88d-1206-463c-b992-8852b18c726f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e0c04e9fedce58ee97c01dcfce5e6d868c573c212a29fcac1be75754c6c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.725903 kubelet[2732]: E1031 20:56:29.725844 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e0c04e9fedce58ee97c01dcfce5e6d868c573c212a29fcac1be75754c6c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.725952 kubelet[2732]: E1031 20:56:29.725929 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"138e0c04e9fedce58ee97c01dcfce5e6d868c573c212a29fcac1be75754c6c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4qcs2" Oct 31 20:56:29.725976 kubelet[2732]: E1031 20:56:29.725949 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"138e0c04e9fedce58ee97c01dcfce5e6d868c573c212a29fcac1be75754c6c48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4qcs2" Oct 31 20:56:29.726064 kubelet[2732]: E1031 20:56:29.726038 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4qcs2_kube-system(4809a88d-1206-463c-b992-8852b18c726f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4qcs2_kube-system(4809a88d-1206-463c-b992-8852b18c726f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"138e0c04e9fedce58ee97c01dcfce5e6d868c573c212a29fcac1be75754c6c48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4qcs2" podUID="4809a88d-1206-463c-b992-8852b18c726f" Oct 31 20:56:29.730342 containerd[1586]: time="2025-10-31T20:56:29.730289808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fzzpl,Uid:85524551-e531-4ebd-be44-e40fd94305ba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8964353715bceab1d541870f75b78cd5562954e38ba2692ebfe78a37bb85fd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.730580 kubelet[2732]: E1031 20:56:29.730546 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8964353715bceab1d541870f75b78cd5562954e38ba2692ebfe78a37bb85fd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.730760 kubelet[2732]: E1031 20:56:29.730705 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8964353715bceab1d541870f75b78cd5562954e38ba2692ebfe78a37bb85fd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:29.730760 kubelet[2732]: E1031 20:56:29.730733 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8964353715bceab1d541870f75b78cd5562954e38ba2692ebfe78a37bb85fd37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-fzzpl" Oct 31 20:56:29.730890 kubelet[2732]: E1031 20:56:29.730863 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fzzpl_calico-system(85524551-e531-4ebd-be44-e40fd94305ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fzzpl_calico-system(85524551-e531-4ebd-be44-e40fd94305ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8964353715bceab1d541870f75b78cd5562954e38ba2692ebfe78a37bb85fd37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:29.731977 containerd[1586]: time="2025-10-31T20:56:29.731243621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bmt48,Uid:ef80b07d-34c2-483b-b1fb-77de41f9c304,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7157f8751a69b86a350bb09c481e5e02cecb45aa8f9c05463d57252ff9a4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.732080 kubelet[2732]: E1031 20:56:29.731438 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7157f8751a69b86a350bb09c481e5e02cecb45aa8f9c05463d57252ff9a4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.732080 kubelet[2732]: E1031 20:56:29.731860 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7157f8751a69b86a350bb09c481e5e02cecb45aa8f9c05463d57252ff9a4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bmt48" Oct 31 20:56:29.732080 kubelet[2732]: E1031 20:56:29.731879 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7157f8751a69b86a350bb09c481e5e02cecb45aa8f9c05463d57252ff9a4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bmt48" Oct 31 20:56:29.732229 kubelet[2732]: E1031 20:56:29.731917 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bmt48_calico-system(ef80b07d-34c2-483b-b1fb-77de41f9c304)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bmt48_calico-system(ef80b07d-34c2-483b-b1fb-77de41f9c304)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e7157f8751a69b86a350bb09c481e5e02cecb45aa8f9c05463d57252ff9a4a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bmt48" podUID="ef80b07d-34c2-483b-b1fb-77de41f9c304" Oct 31 20:56:29.732273 containerd[1586]: time="2025-10-31T20:56:29.732114295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-q2hch,Uid:d8950dfd-888a-4512-a9c0-edda8417ecdd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ade1ec5abdc5fc42935d5aba45b5ba1c055e4903b885d55fcd8844fbb717f13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.732777 kubelet[2732]: E1031 20:56:29.732404 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ade1ec5abdc5fc42935d5aba45b5ba1c055e4903b885d55fcd8844fbb717f13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.732777 kubelet[2732]: E1031 20:56:29.732573 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ade1ec5abdc5fc42935d5aba45b5ba1c055e4903b885d55fcd8844fbb717f13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" Oct 31 20:56:29.732777 kubelet[2732]: E1031 20:56:29.732698 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ade1ec5abdc5fc42935d5aba45b5ba1c055e4903b885d55fcd8844fbb717f13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" Oct 31 20:56:29.732894 kubelet[2732]: E1031 20:56:29.732740 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cfd5bcf7c-q2hch_calico-apiserver(d8950dfd-888a-4512-a9c0-edda8417ecdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cfd5bcf7c-q2hch_calico-apiserver(d8950dfd-888a-4512-a9c0-edda8417ecdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ade1ec5abdc5fc42935d5aba45b5ba1c055e4903b885d55fcd8844fbb717f13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" podUID="d8950dfd-888a-4512-a9c0-edda8417ecdd" Oct 31 20:56:29.738018 containerd[1586]: time="2025-10-31T20:56:29.737918150Z" level=error msg="Failed to destroy network for sandbox \"368fe9d1ec6b9f862e537a24cd646427ae4fa8e163187acb62ef770db94ed972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.740535 containerd[1586]: time="2025-10-31T20:56:29.739629172Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-d7m9d,Uid:bba5ef03-9e42-43a6-ab98-a0179f6b153f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"368fe9d1ec6b9f862e537a24cd646427ae4fa8e163187acb62ef770db94ed972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.740682 kubelet[2732]: E1031 20:56:29.739833 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"368fe9d1ec6b9f862e537a24cd646427ae4fa8e163187acb62ef770db94ed972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.740682 kubelet[2732]: E1031 20:56:29.739886 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"368fe9d1ec6b9f862e537a24cd646427ae4fa8e163187acb62ef770db94ed972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" Oct 31 20:56:29.740682 kubelet[2732]: E1031 20:56:29.739905 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"368fe9d1ec6b9f862e537a24cd646427ae4fa8e163187acb62ef770db94ed972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" Oct 31 20:56:29.742271 containerd[1586]: time="2025-10-31T20:56:29.742224431Z" level=error msg="Failed to destroy network for sandbox \"02c183d1806955325968af072abdc9697662a8ee9fb8edf145ed36ac87c0db70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.742872 kubelet[2732]: E1031 20:56:29.739945 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cfd5bcf7c-d7m9d_calico-apiserver(bba5ef03-9e42-43a6-ab98-a0179f6b153f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cfd5bcf7c-d7m9d_calico-apiserver(bba5ef03-9e42-43a6-ab98-a0179f6b153f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"368fe9d1ec6b9f862e537a24cd646427ae4fa8e163187acb62ef770db94ed972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" podUID="bba5ef03-9e42-43a6-ab98-a0179f6b153f" Oct 31 20:56:29.746900 containerd[1586]: time="2025-10-31T20:56:29.746841222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fd7b7cc-bwfpc,Uid:8e2bc50f-cc6b-4d4f-8a65-c820c44acc89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"02c183d1806955325968af072abdc9697662a8ee9fb8edf145ed36ac87c0db70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.747477 kubelet[2732]: E1031 20:56:29.747434 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c183d1806955325968af072abdc9697662a8ee9fb8edf145ed36ac87c0db70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 20:56:29.747544 kubelet[2732]: E1031 20:56:29.747494 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c183d1806955325968af072abdc9697662a8ee9fb8edf145ed36ac87c0db70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fd7b7cc-bwfpc" Oct 31 20:56:29.747544 kubelet[2732]: E1031 20:56:29.747513 2732 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02c183d1806955325968af072abdc9697662a8ee9fb8edf145ed36ac87c0db70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fd7b7cc-bwfpc" Oct 31 20:56:29.747622 kubelet[2732]: E1031 20:56:29.747549 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fd7b7cc-bwfpc_calico-system(8e2bc50f-cc6b-4d4f-8a65-c820c44acc89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fd7b7cc-bwfpc_calico-system(8e2bc50f-cc6b-4d4f-8a65-c820c44acc89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02c183d1806955325968af072abdc9697662a8ee9fb8edf145ed36ac87c0db70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fd7b7cc-bwfpc" podUID="8e2bc50f-cc6b-4d4f-8a65-c820c44acc89" Oct 31 20:56:30.478520 systemd[1]: run-netns-cni\x2d9d70b012\x2d6179\x2dbc2e\x2d8bb1\x2d2bfe8f777cd3.mount: Deactivated successfully. Oct 31 20:56:30.478631 systemd[1]: run-netns-cni\x2d48ce3f54\x2d4284\x2deaba\x2ddaa6\x2da49acd175a40.mount: Deactivated successfully. Oct 31 20:56:30.478690 systemd[1]: run-netns-cni\x2da8584b66\x2d39a1\x2de51b\x2d4044\x2dfaed5b65767d.mount: Deactivated successfully. Oct 31 20:56:30.478734 systemd[1]: run-netns-cni\x2d68e72a34\x2d55f5\x2d1a99\x2dc186\x2dc386ce5ba22e.mount: Deactivated successfully. Oct 31 20:56:33.563454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790291877.mount: Deactivated successfully. 
Oct 31 20:56:33.845903 containerd[1586]: time="2025-10-31T20:56:33.845752765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:33.848580 containerd[1586]: time="2025-10-31T20:56:33.848506520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Oct 31 20:56:33.850750 containerd[1586]: time="2025-10-31T20:56:33.850709699Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:33.859883 containerd[1586]: time="2025-10-31T20:56:33.859834953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 20:56:33.862649 containerd[1586]: time="2025-10-31T20:56:33.862518535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.204628603s" Oct 31 20:56:33.862649 containerd[1586]: time="2025-10-31T20:56:33.862557662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 31 20:56:33.902862 containerd[1586]: time="2025-10-31T20:56:33.902822123Z" level=info msg="CreateContainer within sandbox \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 20:56:33.918673 containerd[1586]: time="2025-10-31T20:56:33.916883147Z" level=info msg="Container a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:33.926740 containerd[1586]: time="2025-10-31T20:56:33.926700159Z" level=info msg="CreateContainer within sandbox \"96093b6db0c7707131ab4c2b6e2289f2f09bd0afc4526780b216762ba771800e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892\"" Oct 31 20:56:33.927418 containerd[1586]: time="2025-10-31T20:56:33.927382236Z" level=info msg="StartContainer for \"a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892\"" Oct 31 20:56:33.929215 containerd[1586]: time="2025-10-31T20:56:33.929182907Z" level=info msg="connecting to shim a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892" address="unix:///run/containerd/s/bdf3512b48cd83297b2d3b3f7d346e1869401ce7eccb78f0d53d43c9b689a3cb" protocol=ttrpc version=3 Oct 31 20:56:33.957295 systemd[1]: Started cri-containerd-a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892.scope - libcontainer container a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892. Oct 31 20:56:34.001348 containerd[1586]: time="2025-10-31T20:56:34.001273972Z" level=info msg="StartContainer for \"a361f720df25053a2b2dc199cb8d7b0eaf6282e862dd85e5059c01d3c9049892\" returns successfully" Oct 31 20:56:34.122081 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 20:56:34.122310 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 31 20:56:34.269313 kubelet[2732]: I1031 20:56:34.269076 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-backend-key-pair\") pod \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\" (UID: \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\") " Oct 31 20:56:34.270239 kubelet[2732]: I1031 20:56:34.269387 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-ca-bundle\") pod \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\" (UID: \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\") " Oct 31 20:56:34.270239 kubelet[2732]: I1031 20:56:34.269416 2732 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvs8x\" (UniqueName: \"kubernetes.io/projected/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-kube-api-access-tvs8x\") pod \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\" (UID: \"8e2bc50f-cc6b-4d4f-8a65-c820c44acc89\") " Oct 31 20:56:34.270890 kubelet[2732]: I1031 20:56:34.270242 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8e2bc50f-cc6b-4d4f-8a65-c820c44acc89" (UID: "8e2bc50f-cc6b-4d4f-8a65-c820c44acc89"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 20:56:34.275821 kubelet[2732]: I1031 20:56:34.275054 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-kube-api-access-tvs8x" (OuterVolumeSpecName: "kube-api-access-tvs8x") pod "8e2bc50f-cc6b-4d4f-8a65-c820c44acc89" (UID: "8e2bc50f-cc6b-4d4f-8a65-c820c44acc89"). InnerVolumeSpecName "kube-api-access-tvs8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 20:56:34.280127 kubelet[2732]: I1031 20:56:34.279756 2732 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8e2bc50f-cc6b-4d4f-8a65-c820c44acc89" (UID: "8e2bc50f-cc6b-4d4f-8a65-c820c44acc89"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 20:56:34.371274 kubelet[2732]: I1031 20:56:34.371234 2732 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tvs8x\" (UniqueName: \"kubernetes.io/projected/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-kube-api-access-tvs8x\") on node \"localhost\" DevicePath \"\"" Oct 31 20:56:34.371451 kubelet[2732]: I1031 20:56:34.371437 2732 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 20:56:34.371512 kubelet[2732]: I1031 20:56:34.371502 2732 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 20:56:34.563240 systemd[1]: var-lib-kubelet-pods-8e2bc50f\x2dcc6b\x2d4d4f\x2d8a65\x2dc820c44acc89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvs8x.mount: Deactivated successfully. 
Oct 31 20:56:34.563333 systemd[1]: var-lib-kubelet-pods-8e2bc50f\x2dcc6b\x2d4d4f\x2d8a65\x2dc820c44acc89-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 20:56:34.677844 kubelet[2732]: E1031 20:56:34.677690 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:34.679189 systemd[1]: Removed slice kubepods-besteffort-pod8e2bc50f_cc6b_4d4f_8a65_c820c44acc89.slice - libcontainer container kubepods-besteffort-pod8e2bc50f_cc6b_4d4f_8a65_c820c44acc89.slice. Oct 31 20:56:34.697852 kubelet[2732]: I1031 20:56:34.697752 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4fhwh" podStartSLOduration=1.8514883690000001 podStartE2EDuration="13.697695561s" podCreationTimestamp="2025-10-31 20:56:21 +0000 UTC" firstStartedPulling="2025-10-31 20:56:22.037715193 +0000 UTC m=+22.591013449" lastFinishedPulling="2025-10-31 20:56:33.883922385 +0000 UTC m=+34.437220641" observedRunningTime="2025-10-31 20:56:34.697604466 +0000 UTC m=+35.250902762" watchObservedRunningTime="2025-10-31 20:56:34.697695561 +0000 UTC m=+35.250993857" Oct 31 20:56:34.753514 systemd[1]: Created slice kubepods-besteffort-pod96bbfa9f_5f82_4712_bb67_615baa536087.slice - libcontainer container kubepods-besteffort-pod96bbfa9f_5f82_4712_bb67_615baa536087.slice. Oct 31 20:56:34.774475 kubelet[2732]: I1031 20:56:34.774436 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fh5\" (UniqueName: \"kubernetes.io/projected/96bbfa9f-5f82-4712-bb67-615baa536087-kube-api-access-z5fh5\") pod \"whisker-6f84b54649-5z922\" (UID: \"96bbfa9f-5f82-4712-bb67-615baa536087\") " pod="calico-system/whisker-6f84b54649-5z922" Oct 31 20:56:34.774621 kubelet[2732]: I1031 20:56:34.774520 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/96bbfa9f-5f82-4712-bb67-615baa536087-whisker-backend-key-pair\") pod \"whisker-6f84b54649-5z922\" (UID: \"96bbfa9f-5f82-4712-bb67-615baa536087\") " pod="calico-system/whisker-6f84b54649-5z922" Oct 31 20:56:34.774621 kubelet[2732]: I1031 20:56:34.774558 2732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96bbfa9f-5f82-4712-bb67-615baa536087-whisker-ca-bundle\") pod \"whisker-6f84b54649-5z922\" (UID: \"96bbfa9f-5f82-4712-bb67-615baa536087\") " pod="calico-system/whisker-6f84b54649-5z922" Oct 31 20:56:35.058383 containerd[1586]: time="2025-10-31T20:56:35.058315196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f84b54649-5z922,Uid:96bbfa9f-5f82-4712-bb67-615baa536087,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:35.238237 systemd-networkd[1504]: calic7021abaf7b: Link UP Oct 31 20:56:35.238656 systemd-networkd[1504]: calic7021abaf7b: Gained carrier Oct 31 20:56:35.253220 containerd[1586]: 2025-10-31 20:56:35.083 [INFO][3881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 20:56:35.253220 containerd[1586]: 2025-10-31 20:56:35.113 [INFO][3881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f84b54649--5z922-eth0 whisker-6f84b54649- calico-system 96bbfa9f-5f82-4712-bb67-615baa536087 935 0 
2025-10-31 20:56:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f84b54649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f84b54649-5z922 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic7021abaf7b [] [] }} ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-" Oct 31 20:56:35.253220 containerd[1586]: 2025-10-31 20:56:35.113 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.253220 containerd[1586]: 2025-10-31 20:56:35.189 [INFO][3896] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" HandleID="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Workload="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.189 [INFO][3896] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" HandleID="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Workload="localhost-k8s-whisker--6f84b54649--5z922-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502f80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f84b54649-5z922", "timestamp":"2025-10-31 20:56:35.189578884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.189 [INFO][3896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.190 [INFO][3896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.190 [INFO][3896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.200 [INFO][3896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" host="localhost" Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.206 [INFO][3896] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.211 [INFO][3896] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.213 [INFO][3896] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.216 [INFO][3896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:35.253472 containerd[1586]: 2025-10-31 20:56:35.216 [INFO][3896] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" host="localhost" Oct 31 20:56:35.253668 containerd[1586]: 2025-10-31 20:56:35.218 [INFO][3896] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1 Oct 31 20:56:35.253668 containerd[1586]: 2025-10-31 20:56:35.222 [INFO][3896] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" host="localhost" Oct 31 20:56:35.253668 containerd[1586]: 2025-10-31 20:56:35.227 [INFO][3896] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" host="localhost" Oct 31 20:56:35.253668 containerd[1586]: 2025-10-31 20:56:35.227 [INFO][3896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" host="localhost" Oct 31 20:56:35.253668 containerd[1586]: 2025-10-31 20:56:35.227 [INFO][3896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 20:56:35.253668 containerd[1586]: 2025-10-31 20:56:35.227 [INFO][3896] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" HandleID="k8s-pod-network.47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Workload="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.253777 containerd[1586]: 2025-10-31 20:56:35.230 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f84b54649--5z922-eth0", GenerateName:"whisker-6f84b54649-", Namespace:"calico-system", SelfLink:"", UID:"96bbfa9f-5f82-4712-bb67-615baa536087", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f84b54649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f84b54649-5z922", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic7021abaf7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:35.253777 containerd[1586]: 2025-10-31 20:56:35.230 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.253840 containerd[1586]: 2025-10-31 20:56:35.230 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7021abaf7b ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.253840 containerd[1586]: 2025-10-31 20:56:35.239 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.253878 containerd[1586]: 2025-10-31 20:56:35.240 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f84b54649--5z922-eth0", GenerateName:"whisker-6f84b54649-", Namespace:"calico-system", SelfLink:"", UID:"96bbfa9f-5f82-4712-bb67-615baa536087", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f84b54649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1", Pod:"whisker-6f84b54649-5z922", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic7021abaf7b", MAC:"7e:ae:70:0f:fa:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:35.253925 containerd[1586]: 2025-10-31 20:56:35.250 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" Namespace="calico-system" Pod="whisker-6f84b54649-5z922" WorkloadEndpoint="localhost-k8s-whisker--6f84b54649--5z922-eth0" Oct 31 20:56:35.380751 containerd[1586]: time="2025-10-31T20:56:35.380621712Z" level=info msg="connecting to shim 47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1" address="unix:///run/containerd/s/3bc1903bfd9f19bc9884352fc0c156c86afc580276e8c6dc13ffc7d56a1bcf62" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:35.409336 systemd[1]: Started cri-containerd-47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1.scope - libcontainer container 47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1. 
Oct 31 20:56:35.443404 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:35.533012 containerd[1586]: time="2025-10-31T20:56:35.532919393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f84b54649-5z922,Uid:96bbfa9f-5f82-4712-bb67-615baa536087,Namespace:calico-system,Attempt:0,} returns sandbox id \"47485cfaf02806e22a1ff82846bb2a2c34a7145252548ab66d0c0dbbd8187fa1\"" Oct 31 20:56:35.543921 kubelet[2732]: I1031 20:56:35.542999 2732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2bc50f-cc6b-4d4f-8a65-c820c44acc89" path="/var/lib/kubelet/pods/8e2bc50f-cc6b-4d4f-8a65-c820c44acc89/volumes" Oct 31 20:56:35.569577 containerd[1586]: time="2025-10-31T20:56:35.568520318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 20:56:35.677659 kubelet[2732]: I1031 20:56:35.677532 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 20:56:35.678046 kubelet[2732]: E1031 20:56:35.678018 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:35.797445 containerd[1586]: time="2025-10-31T20:56:35.797385728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:35.798579 containerd[1586]: time="2025-10-31T20:56:35.798481828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 20:56:35.798716 containerd[1586]: time="2025-10-31T20:56:35.798566962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:35.800149 kubelet[2732]: E1031 20:56:35.800099 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 20:56:35.800246 kubelet[2732]: E1031 20:56:35.800169 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 20:56:35.803921 kubelet[2732]: E1031 20:56:35.803851 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f1d5cb8199114639b2814101b4b71df3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5fh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f84b54649-5z922_calico-system(96bbfa9f-5f82-4712-bb67-615baa536087): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:35.806149 containerd[1586]: time="2025-10-31T20:56:35.806060872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 20:56:35.968924 systemd-networkd[1504]: vxlan.calico: Link UP Oct 31 20:56:35.968931 systemd-networkd[1504]: vxlan.calico: Gained carrier Oct 31 20:56:36.036224 containerd[1586]: time="2025-10-31T20:56:36.036170363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:36.036988 containerd[1586]: time="2025-10-31T20:56:36.036944491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 20:56:36.037070 containerd[1586]: time="2025-10-31T20:56:36.037033941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:36.037303 kubelet[2732]: E1031 20:56:36.037261 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 20:56:36.037386 kubelet[2732]: E1031 20:56:36.037316 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 
20:56:36.037507 kubelet[2732]: E1031 20:56:36.037438 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5fh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f84b54649-5z922_calico-system(96bbfa9f-5f82-4712-bb67-615baa536087): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:36.038694 kubelet[2732]: E1031 20:56:36.038627 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f84b54649-5z922" podUID="96bbfa9f-5f82-4712-bb67-615baa536087" Oct 31 20:56:36.560235 systemd-networkd[1504]: calic7021abaf7b: Gained IPv6LL Oct 31 20:56:36.678571 kubelet[2732]: E1031 20:56:36.678348 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f84b54649-5z922" podUID="96bbfa9f-5f82-4712-bb67-615baa536087" Oct 31 20:56:37.584213 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Oct 31 20:56:37.688211 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:37530.service - OpenSSH per-connection server daemon (10.0.0.1:37530). Oct 31 20:56:37.756355 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 37530 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:37.757730 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:37.761617 systemd-logind[1572]: New session 8 of user core. Oct 31 20:56:37.767299 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 20:56:37.866785 sshd[4172]: Connection closed by 10.0.0.1 port 37530 Oct 31 20:56:37.867161 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:37.870636 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:37530.service: Deactivated successfully. Oct 31 20:56:37.873412 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 20:56:37.874176 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Oct 31 20:56:37.875032 systemd-logind[1572]: Removed session 8. Oct 31 20:56:40.534490 containerd[1586]: time="2025-10-31T20:56:40.534431253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-d7m9d,Uid:bba5ef03-9e42-43a6-ab98-a0179f6b153f,Namespace:calico-apiserver,Attempt:0,}" Oct 31 20:56:40.539610 containerd[1586]: time="2025-10-31T20:56:40.534569867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fzzpl,Uid:85524551-e531-4ebd-be44-e40fd94305ba,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:40.675542 systemd-networkd[1504]: cali54f2d6b80be: Link UP Oct 31 20:56:40.677163 systemd-networkd[1504]: cali54f2d6b80be: Gained carrier Oct 31 20:56:40.692342 containerd[1586]: 2025-10-31 20:56:40.590 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0 calico-apiserver-7cfd5bcf7c- calico-apiserver bba5ef03-9e42-43a6-ab98-a0179f6b153f 873 0 2025-10-31 20:56:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfd5bcf7c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cfd5bcf7c-d7m9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54f2d6b80be [] [] }} ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-" Oct 31 20:56:40.692342 containerd[1586]: 2025-10-31 20:56:40.590 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" 
Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.692342 containerd[1586]: 2025-10-31 20:56:40.627 [INFO][4215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" HandleID="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Workload="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.627 [INFO][4215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" HandleID="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Workload="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cfd5bcf7c-d7m9d", "timestamp":"2025-10-31 20:56:40.627570814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.627 [INFO][4215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.628 [INFO][4215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.628 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.637 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" host="localhost" Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.647 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.652 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.654 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.657 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:40.692607 containerd[1586]: 2025-10-31 20:56:40.657 [INFO][4215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" host="localhost" Oct 31 20:56:40.692963 containerd[1586]: 2025-10-31 20:56:40.659 [INFO][4215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48 Oct 31 20:56:40.692963 containerd[1586]: 2025-10-31 20:56:40.663 [INFO][4215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" host="localhost" Oct 31 20:56:40.692963 containerd[1586]: 2025-10-31 20:56:40.670 [INFO][4215] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" host="localhost" Oct 31 20:56:40.692963 containerd[1586]: 2025-10-31 20:56:40.670 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" host="localhost" Oct 31 20:56:40.692963 containerd[1586]: 2025-10-31 20:56:40.670 [INFO][4215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 20:56:40.692963 containerd[1586]: 2025-10-31 20:56:40.670 [INFO][4215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" HandleID="k8s-pod-network.03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Workload="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.693255 containerd[1586]: 2025-10-31 20:56:40.672 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0", GenerateName:"calico-apiserver-7cfd5bcf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bba5ef03-9e42-43a6-ab98-a0179f6b153f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd5bcf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cfd5bcf7c-d7m9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54f2d6b80be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:40.693344 containerd[1586]: 2025-10-31 20:56:40.673 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.693344 containerd[1586]: 2025-10-31 20:56:40.673 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54f2d6b80be ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.693344 containerd[1586]: 2025-10-31 
20:56:40.676 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.693434 containerd[1586]: 2025-10-31 20:56:40.677 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0", GenerateName:"calico-apiserver-7cfd5bcf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"bba5ef03-9e42-43a6-ab98-a0179f6b153f", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd5bcf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48", Pod:"calico-apiserver-7cfd5bcf7c-d7m9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54f2d6b80be", MAC:"fa:3f:28:49:6c:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:40.693496 containerd[1586]: 2025-10-31 20:56:40.689 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-d7m9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--d7m9d-eth0" Oct 31 20:56:40.717742 containerd[1586]: time="2025-10-31T20:56:40.717699148Z" level=info msg="connecting to shim 03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48" address="unix:///run/containerd/s/553c9b485c649b5eb90682daeae7bd314bc4885fb69375efbd0f9947965c4362" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:40.741197 systemd[1]: Started cri-containerd-03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48.scope - libcontainer container 03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48. 
Oct 31 20:56:40.771381 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:40.773379 systemd-networkd[1504]: cali410e35a24e6: Link UP Oct 31 20:56:40.774180 systemd-networkd[1504]: cali410e35a24e6: Gained carrier Oct 31 20:56:40.794387 containerd[1586]: 2025-10-31 20:56:40.604 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fzzpl-eth0 csi-node-driver- calico-system 85524551-e531-4ebd-be44-e40fd94305ba 764 0 2025-10-31 20:56:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fzzpl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali410e35a24e6 [] [] }} ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-" Oct 31 20:56:40.794387 containerd[1586]: 2025-10-31 20:56:40.604 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.794387 containerd[1586]: 2025-10-31 20:56:40.637 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" HandleID="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Workload="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.637 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" HandleID="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Workload="localhost-k8s-csi--node--driver--fzzpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fzzpl", "timestamp":"2025-10-31 20:56:40.637758451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.637 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.670 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.670 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.738 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" host="localhost" Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.747 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.752 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.753 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.755 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:40.794551 containerd[1586]: 2025-10-31 20:56:40.755 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" host="localhost" Oct 31 20:56:40.794819 containerd[1586]: 2025-10-31 20:56:40.757 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88 Oct 31 20:56:40.794819 containerd[1586]: 2025-10-31 20:56:40.761 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" host="localhost" Oct 31 20:56:40.794819 containerd[1586]: 2025-10-31 20:56:40.766 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" host="localhost" Oct 31 20:56:40.794819 containerd[1586]: 2025-10-31 20:56:40.766 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" host="localhost" Oct 31 20:56:40.794819 containerd[1586]: 2025-10-31 20:56:40.766 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 20:56:40.794819 containerd[1586]: 2025-10-31 20:56:40.766 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" HandleID="k8s-pod-network.c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Workload="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.794945 containerd[1586]: 2025-10-31 20:56:40.770 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fzzpl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85524551-e531-4ebd-be44-e40fd94305ba", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fzzpl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali410e35a24e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:40.794997 containerd[1586]: 2025-10-31 20:56:40.770 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.794997 containerd[1586]: 2025-10-31 20:56:40.770 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali410e35a24e6 ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.794997 containerd[1586]: 2025-10-31 20:56:40.774 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.795050 containerd[1586]: 2025-10-31 20:56:40.775 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fzzpl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85524551-e531-4ebd-be44-e40fd94305ba", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88", Pod:"csi-node-driver-fzzpl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali410e35a24e6", MAC:"6a:fb:e0:7a:23:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:40.795125 containerd[1586]: 2025-10-31 20:56:40.786 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" Namespace="calico-system" Pod="csi-node-driver-fzzpl" WorkloadEndpoint="localhost-k8s-csi--node--driver--fzzpl-eth0" Oct 31 20:56:40.804527 containerd[1586]: time="2025-10-31T20:56:40.804422456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-d7m9d,Uid:bba5ef03-9e42-43a6-ab98-a0179f6b153f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"03f68e01ccbd2b9ebff6364d7d3174da8f47e7ee828c6641d59a455074df5f48\"" Oct 31 20:56:40.806400 containerd[1586]: time="2025-10-31T20:56:40.806357573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 20:56:40.816821 containerd[1586]: time="2025-10-31T20:56:40.816770913Z" level=info msg="connecting to shim c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88" address="unix:///run/containerd/s/b9420543ba0d43b6f72d44a14ce5fa59b302c1c54fe9049a8b101906ee7ce46f" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:40.841253 systemd[1]: Started cri-containerd-c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88.scope - libcontainer container c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88. 
Oct 31 20:56:40.851013 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:40.862306 containerd[1586]: time="2025-10-31T20:56:40.862272425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fzzpl,Uid:85524551-e531-4ebd-be44-e40fd94305ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8eb081e842ca83463d2abe0c4605fdc9b06067dbd37745d0fd56d1beb5dba88\"" Oct 31 20:56:41.009294 containerd[1586]: time="2025-10-31T20:56:41.009228559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:41.010069 containerd[1586]: time="2025-10-31T20:56:41.010037359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 20:56:41.010191 containerd[1586]: time="2025-10-31T20:56:41.010106126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:41.010303 kubelet[2732]: E1031 20:56:41.010253 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:41.010303 kubelet[2732]: E1031 20:56:41.010294 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:41.010679 kubelet[2732]: E1031 20:56:41.010502 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqvpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd5bcf7c-d7m9d_calico-apiserver(bba5ef03-9e42-43a6-ab98-a0179f6b153f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:41.010827 containerd[1586]: time="2025-10-31T20:56:41.010538329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 20:56:41.011994 kubelet[2732]: E1031 20:56:41.011954 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" podUID="bba5ef03-9e42-43a6-ab98-a0179f6b153f" Oct 31 20:56:41.225471 containerd[1586]: time="2025-10-31T20:56:41.225152257Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:41.226130 containerd[1586]: time="2025-10-31T20:56:41.226065268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 20:56:41.226339 containerd[1586]: time="2025-10-31T20:56:41.226163237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:41.226521 kubelet[2732]: E1031 20:56:41.226337 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 20:56:41.226521 kubelet[2732]: E1031 20:56:41.226390 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 20:56:41.226606 kubelet[2732]: E1031 20:56:41.226510 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kjzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fzzpl_calico-system(85524551-e531-4ebd-be44-e40fd94305ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:41.228401 containerd[1586]: time="2025-10-31T20:56:41.228378337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 20:56:41.440337 containerd[1586]: time="2025-10-31T20:56:41.440166906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:41.441205 containerd[1586]: time="2025-10-31T20:56:41.441165724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 20:56:41.441388 containerd[1586]: time="2025-10-31T20:56:41.441229251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:41.441550 kubelet[2732]: E1031 20:56:41.441499 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 20:56:41.441602 kubelet[2732]: E1031 20:56:41.441558 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 20:56:41.441715 kubelet[2732]: E1031 20:56:41.441672 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kjzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fzzpl_calico-system(85524551-e531-4ebd-be44-e40fd94305ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:41.442920 kubelet[2732]: E1031 20:56:41.442866 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:41.534810 containerd[1586]: time="2025-10-31T20:56:41.534772992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bdff9fc5-g6ppj,Uid:74e39d71-e729-442a-ad78-d80f8756d7da,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:41.671257 systemd-networkd[1504]: 
cali7dc4e94b375: Link UP Oct 31 20:56:41.671489 systemd-networkd[1504]: cali7dc4e94b375: Gained carrier Oct 31 20:56:41.685635 containerd[1586]: 2025-10-31 20:56:41.610 [INFO][4350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0 calico-kube-controllers-bdff9fc5- calico-system 74e39d71-e729-442a-ad78-d80f8756d7da 871 0 2025-10-31 20:56:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bdff9fc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bdff9fc5-g6ppj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7dc4e94b375 [] [] }} ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-" Oct 31 20:56:41.685635 containerd[1586]: 2025-10-31 20:56:41.611 [INFO][4350] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.685635 containerd[1586]: 2025-10-31 20:56:41.634 [INFO][4365] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" HandleID="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Workload="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.635 [INFO][4365] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" HandleID="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Workload="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000119610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bdff9fc5-g6ppj", "timestamp":"2025-10-31 20:56:41.634901186 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.635 [INFO][4365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.635 [INFO][4365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.635 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.644 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" host="localhost" Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.648 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.651 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.653 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.655 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:41.685825 containerd[1586]: 2025-10-31 20:56:41.655 [INFO][4365] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" host="localhost" Oct 31 20:56:41.686025 containerd[1586]: 2025-10-31 20:56:41.656 [INFO][4365] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4 Oct 31 20:56:41.686025 containerd[1586]: 2025-10-31 20:56:41.660 [INFO][4365] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" host="localhost" Oct 31 20:56:41.686025 containerd[1586]: 2025-10-31 20:56:41.666 [INFO][4365] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" host="localhost" Oct 31 20:56:41.686025 containerd[1586]: 2025-10-31 20:56:41.666 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" host="localhost" Oct 31 20:56:41.686025 containerd[1586]: 2025-10-31 20:56:41.666 [INFO][4365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 20:56:41.686025 containerd[1586]: 2025-10-31 20:56:41.666 [INFO][4365] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" HandleID="k8s-pod-network.a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Workload="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.686148 containerd[1586]: 2025-10-31 20:56:41.668 [INFO][4350] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0", GenerateName:"calico-kube-controllers-bdff9fc5-", Namespace:"calico-system", SelfLink:"", UID:"74e39d71-e729-442a-ad78-d80f8756d7da", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bdff9fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bdff9fc5-g6ppj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7dc4e94b375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:41.686197 containerd[1586]: 2025-10-31 20:56:41.668 [INFO][4350] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.686197 containerd[1586]: 2025-10-31 20:56:41.668 [INFO][4350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dc4e94b375 ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.686197 containerd[1586]: 2025-10-31 20:56:41.671 [INFO][4350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.686250 containerd[1586]: 2025-10-31 20:56:41.674 [INFO][4350] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0", GenerateName:"calico-kube-controllers-bdff9fc5-", Namespace:"calico-system", SelfLink:"", UID:"74e39d71-e729-442a-ad78-d80f8756d7da", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bdff9fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4", Pod:"calico-kube-controllers-bdff9fc5-g6ppj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7dc4e94b375", MAC:"c6:10:f6:3b:4d:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:41.686299 containerd[1586]: 2025-10-31 20:56:41.682 [INFO][4350] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" Namespace="calico-system" Pod="calico-kube-controllers-bdff9fc5-g6ppj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bdff9fc5--g6ppj-eth0" Oct 31 20:56:41.697343 kubelet[2732]: E1031 20:56:41.697228 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" podUID="bba5ef03-9e42-43a6-ab98-a0179f6b153f" Oct 31 20:56:41.697502 kubelet[2732]: E1031 20:56:41.697441 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:41.708238 containerd[1586]: time="2025-10-31T20:56:41.708197963Z" level=info msg="connecting to shim a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4" address="unix:///run/containerd/s/50b6b9f2aabb573d340045525fa2f6013593be2040e3e8481a9b4149e5f53229" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:41.738271 systemd[1]: Started cri-containerd-a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4.scope - libcontainer container a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4. Oct 31 20:56:41.751665 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:41.773679 containerd[1586]: time="2025-10-31T20:56:41.773636322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bdff9fc5-g6ppj,Uid:74e39d71-e729-442a-ad78-d80f8756d7da,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4aa470876dd6882012f8ff7d191d3af3efc29ed6c569750c3d42099cf73eea4\"" Oct 31 20:56:41.775346 containerd[1586]: time="2025-10-31T20:56:41.775310168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 20:56:41.998715 containerd[1586]: time="2025-10-31T20:56:41.998208877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:42.001995 containerd[1586]: time="2025-10-31T20:56:42.001777868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 20:56:42.001995 containerd[1586]: time="2025-10-31T20:56:42.001829273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:42.002161 kubelet[2732]: E1031 20:56:42.002034 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 20:56:42.002161 kubelet[2732]: E1031 20:56:42.002117 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 20:56:42.002313 kubelet[2732]: E1031 20:56:42.002255 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bdff9fc5-g6ppj_calico-system(74e39d71-e729-442a-ad78-d80f8756d7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:42.003491 kubelet[2732]: E1031 20:56:42.003416 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" podUID="74e39d71-e729-442a-ad78-d80f8756d7da" Oct 31 20:56:42.128929 systemd-networkd[1504]: cali54f2d6b80be: Gained IPv6LL Oct 31 20:56:42.534519 kubelet[2732]: E1031 20:56:42.534473 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:42.535273 containerd[1586]: time="2025-10-31T20:56:42.534718836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-q2hch,Uid:d8950dfd-888a-4512-a9c0-edda8417ecdd,Namespace:calico-apiserver,Attempt:0,}" Oct 31 20:56:42.535273 containerd[1586]: time="2025-10-31T20:56:42.535162998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qcs2,Uid:4809a88d-1206-463c-b992-8852b18c726f,Namespace:kube-system,Attempt:0,}" Oct 31 20:56:42.640562 systemd-networkd[1504]: cali410e35a24e6: Gained IPv6LL Oct 31 20:56:42.671955 systemd-networkd[1504]: calicc50e9fed97: Link UP Oct 31 20:56:42.673062 systemd-networkd[1504]: calicc50e9fed97: Gained carrier Oct 31 20:56:42.687138 containerd[1586]: 2025-10-31 20:56:42.584 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0 coredns-668d6bf9bc- kube-system 4809a88d-1206-463c-b992-8852b18c726f 872 0 2025-10-31 20:56:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4qcs2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc50e9fed97 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-" Oct 31 20:56:42.687138 containerd[1586]: 2025-10-31 20:56:42.585 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.687138 containerd[1586]: 2025-10-31 20:56:42.620 [INFO][4460] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" HandleID="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Workload="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.620 [INFO][4460] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" HandleID="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Workload="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4qcs2", "timestamp":"2025-10-31 20:56:42.620597467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.620 [INFO][4460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.620 [INFO][4460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.620 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.634 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" host="localhost" Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.640 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.645 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.647 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.650 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:42.687616 containerd[1586]: 2025-10-31 20:56:42.650 [INFO][4460] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" host="localhost" Oct 31 20:56:42.687822 containerd[1586]: 2025-10-31 20:56:42.652 [INFO][4460] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd Oct 31 20:56:42.687822 containerd[1586]: 2025-10-31 20:56:42.656 [INFO][4460] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" host="localhost" Oct 31 20:56:42.687822 containerd[1586]: 2025-10-31 20:56:42.662 [INFO][4460] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" host="localhost" Oct 31 20:56:42.687822 containerd[1586]: 2025-10-31 20:56:42.662 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" host="localhost" Oct 31 20:56:42.687822 containerd[1586]: 2025-10-31 20:56:42.662 [INFO][4460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 20:56:42.687822 containerd[1586]: 2025-10-31 20:56:42.662 [INFO][4460] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" HandleID="k8s-pod-network.2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Workload="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.688024 containerd[1586]: 2025-10-31 20:56:42.665 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4809a88d-1206-463c-b992-8852b18c726f", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4qcs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc50e9fed97", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:42.688081 containerd[1586]: 2025-10-31 20:56:42.665 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.688081 containerd[1586]: 2025-10-31 20:56:42.665 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc50e9fed97 ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.688081 containerd[1586]: 2025-10-31 20:56:42.673 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.688169 
containerd[1586]: 2025-10-31 20:56:42.673 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4809a88d-1206-463c-b992-8852b18c726f", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd", Pod:"coredns-668d6bf9bc-4qcs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc50e9fed97", MAC:"66:db:a9:81:5a:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:42.688169 containerd[1586]: 2025-10-31 20:56:42.684 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" Namespace="kube-system" Pod="coredns-668d6bf9bc-4qcs2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4qcs2-eth0" Oct 31 20:56:42.702850 kubelet[2732]: E1031 20:56:42.702804 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" podUID="bba5ef03-9e42-43a6-ab98-a0179f6b153f" Oct 31 20:56:42.703195 kubelet[2732]: E1031 20:56:42.703163 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" podUID="74e39d71-e729-442a-ad78-d80f8756d7da" Oct 31 20:56:42.703909 kubelet[2732]: E1031 20:56:42.703848 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:42.725082 containerd[1586]: time="2025-10-31T20:56:42.725033925Z" level=info msg="connecting to shim 2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd" address="unix:///run/containerd/s/2cf7dc7be6caaa628b7e63b8332ffca9f841be7a5ef25e0d8aa8fbac8c700b78" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:42.783751 systemd-networkd[1504]: cali9645590fb71: Link UP Oct 31 20:56:42.784767 systemd-networkd[1504]: cali9645590fb71: Gained carrier Oct 31 20:56:42.789310 systemd[1]: Started cri-containerd-2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd.scope - libcontainer container 2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd. 
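Every pull in this excerpt (apiserver, csi, node-driver-registrar, kube-controllers, all at v3.30.4) fails the same way: a 404 from ghcr.io, then ErrImagePull, then ImagePullBackOff, i.e. the tag does not resolve in that registry. A minimal sketch of reproducing one of the failing pulls directly against containerd, outside kubelet, assuming the default socket path and the containerd 1.x Go client module path; the image reference and the k8s.io namespace are taken from the log.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The shim connection lines above show kubelet pulling into the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references that returns 404 in the log; the same "not found" error is expected here.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled:", img.Name())
}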
Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.591 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0 calico-apiserver-7cfd5bcf7c- calico-apiserver d8950dfd-888a-4512-a9c0-edda8417ecdd 868 0 2025-10-31 20:56:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfd5bcf7c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cfd5bcf7c-q2hch eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9645590fb71 [] [] }} ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.591 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.628 [INFO][4466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" HandleID="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Workload="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.629 [INFO][4466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" HandleID="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Workload="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cfd5bcf7c-q2hch", "timestamp":"2025-10-31 20:56:42.628927269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.629 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.662 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.663 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.734 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.747 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.755 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.757 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.760 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.760 [INFO][4466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.762 [INFO][4466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44 Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.766 [INFO][4466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.775 [INFO][4466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.775 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" host="localhost" Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.775 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
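The IPAM trace above is Calico's block-based allocation in miniature: acquire the host-wide IPAM lock, confirm the host's affinity for 192.168.88.128/26, load the block, claim the next free address under a fresh handle, write the block back, and release the lock — yielding 192.168.88.134/26 here. Below is a minimal in-memory sketch of that hand-out step only (illustrative, not Calico's implementation; affinity claims, datastore writes, and reserved addresses are omitted):

// Toy stand-in for one Calico IPAM block such as 192.168.88.128/26 above.
package main

import (
	"fmt"
	"net"
	"sync"
)

type ipamBlock struct {
	mu        sync.Mutex        // stands in for the host-wide IPAM lock
	cidr      *net.IPNet        // the affine block, e.g. 192.168.88.128/26
	allocated map[string]string // IP -> handle that claimed it
}

func newBlock(cidr string) (*ipamBlock, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &ipamBlock{cidr: n, allocated: map[string]string{}}, nil
}

// assign mirrors the logged sequence: take the lock, walk the block for a
// free address, record it against the handle, release the lock.
func (b *ipamBlock) assign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// nextIP returns ip+1 without mutating its argument.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	b, err := newBlock("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	// Placeholder handle; the log uses "k8s-pod-network.<containerID>".
	ip, err := b.assign("k8s-pod-network.example")
	fmt.Println(ip, err) // 192.168.88.128 <nil> on the first call
}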
Oct 31 20:56:42.804880 containerd[1586]: 2025-10-31 20:56:42.775 [INFO][4466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" HandleID="k8s-pod-network.d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Workload="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.805402 containerd[1586]: 2025-10-31 20:56:42.778 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0", GenerateName:"calico-apiserver-7cfd5bcf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8950dfd-888a-4512-a9c0-edda8417ecdd", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd5bcf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cfd5bcf7c-q2hch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9645590fb71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:42.805402 containerd[1586]: 2025-10-31 20:56:42.779 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.805402 containerd[1586]: 2025-10-31 20:56:42.779 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9645590fb71 ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.805402 containerd[1586]: 2025-10-31 20:56:42.786 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.805402 containerd[1586]: 2025-10-31 20:56:42.787 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0", GenerateName:"calico-apiserver-7cfd5bcf7c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8950dfd-888a-4512-a9c0-edda8417ecdd", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd5bcf7c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44", Pod:"calico-apiserver-7cfd5bcf7c-q2hch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9645590fb71", MAC:"be:b0:07:9f:7b:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:42.805402 containerd[1586]: 2025-10-31 20:56:42.799 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd5bcf7c-q2hch" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd5bcf7c--q2hch-eth0" Oct 31 20:56:42.806515 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:42.838124 containerd[1586]: time="2025-10-31T20:56:42.837597766Z" level=info msg="connecting to shim d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44" address="unix:///run/containerd/s/cfd764913f42dba6d81d36c4e62bfb32fab0b67e2f434b62c690816f2b625473" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:42.851887 containerd[1586]: time="2025-10-31T20:56:42.851852539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4qcs2,Uid:4809a88d-1206-463c-b992-8852b18c726f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd\"" Oct 31 20:56:42.853067 kubelet[2732]: E1031 20:56:42.853038 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:42.856246 containerd[1586]: time="2025-10-31T20:56:42.856203198Z" level=info msg="CreateContainer within sandbox \"2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 20:56:42.866071 containerd[1586]: time="2025-10-31T20:56:42.866029344Z" level=info msg="Container 
26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:42.868296 systemd[1]: Started cri-containerd-d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44.scope - libcontainer container d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44. Oct 31 20:56:42.872755 containerd[1586]: time="2025-10-31T20:56:42.872715628Z" level=info msg="CreateContainer within sandbox \"2edf04de96fddd69fc9fc66114d0fb12dc5f8087887346574794ded78a60fecd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08\"" Oct 31 20:56:42.874811 containerd[1586]: time="2025-10-31T20:56:42.873343449Z" level=info msg="StartContainer for \"26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08\"" Oct 31 20:56:42.874811 containerd[1586]: time="2025-10-31T20:56:42.874373468Z" level=info msg="connecting to shim 26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08" address="unix:///run/containerd/s/2cf7dc7be6caaa628b7e63b8332ffca9f841be7a5ef25e0d8aa8fbac8c700b78" protocol=ttrpc version=3 Oct 31 20:56:42.878361 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:34240.service - OpenSSH per-connection server daemon (10.0.0.1:34240). Oct 31 20:56:42.900301 systemd[1]: Started cri-containerd-26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08.scope - libcontainer container 26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08. Oct 31 20:56:42.900881 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:42.939281 containerd[1586]: time="2025-10-31T20:56:42.939210552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd5bcf7c-q2hch,Uid:d8950dfd-888a-4512-a9c0-edda8417ecdd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9aeb516b584ac852177ff243e428ee2b5be9bc0504ac4ce7cae6579168cdd44\"" Oct 31 20:56:42.942873 containerd[1586]: time="2025-10-31T20:56:42.941569860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 20:56:42.955300 containerd[1586]: time="2025-10-31T20:56:42.955265339Z" level=info msg="StartContainer for \"26f14771bb342a97c943bc4681270a922acba08f99958678b88bb76ede6feb08\" returns successfully" Oct 31 20:56:42.959593 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 34240 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:42.961865 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:42.966078 systemd-logind[1572]: New session 9 of user core. Oct 31 20:56:42.972300 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 20:56:43.088209 systemd-networkd[1504]: cali7dc4e94b375: Gained IPv6LL Oct 31 20:56:43.118324 sshd[4616]: Connection closed by 10.0.0.1 port 34240 Oct 31 20:56:43.118667 sshd-session[4582]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:43.122779 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Oct 31 20:56:43.123024 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:34240.service: Deactivated successfully. Oct 31 20:56:43.126652 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 20:56:43.128882 systemd-logind[1572]: Removed session 9. 
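The recurring kubelet dns.go warning ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") is kubelet capping a pod's resolv.conf at three nameservers and dropping the rest. A minimal sketch of that truncation follows — not kubelet's code, and the fourth server (192.0.2.53, a documentation address) is a placeholder for whatever extra entry the node actually had:

// Sketch of kubelet's three-nameserver cap (not kubelet's code).
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolv.conf limit kubelet enforces per pod

// applyLimit extracts nameserver entries and keeps only the first three,
// which is the behaviour the "Nameserver limits exceeded" warning reports.
func applyLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // the rest are omitted
	}
	return servers
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 192.0.2.53\n"
	fmt.Println(applyLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8] — the applied line in the log
}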
Oct 31 20:56:43.177272 containerd[1586]: time="2025-10-31T20:56:43.177226735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:43.178130 containerd[1586]: time="2025-10-31T20:56:43.178076615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 20:56:43.178247 containerd[1586]: time="2025-10-31T20:56:43.178104698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:43.178432 kubelet[2732]: E1031 20:56:43.178368 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:43.178512 kubelet[2732]: E1031 20:56:43.178448 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:43.178866 kubelet[2732]: E1031 20:56:43.178567 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-7cfd5bcf7c-q2hch_calico-apiserver(d8950dfd-888a-4512-a9c0-edda8417ecdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:43.179768 kubelet[2732]: E1031 20:56:43.179723 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" podUID="d8950dfd-888a-4512-a9c0-edda8417ecdd" Oct 31 20:56:43.533679 kubelet[2732]: E1031 20:56:43.533626 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:43.535931 containerd[1586]: time="2025-10-31T20:56:43.535775810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nqh7g,Uid:b5fcba99-ac61-4506-9ea4-62f848c483c1,Namespace:kube-system,Attempt:0,}" Oct 31 20:56:43.662775 systemd-networkd[1504]: calib157bf58389: Link UP Oct 31 20:56:43.663758 systemd-networkd[1504]: calib157bf58389: Gained carrier Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.572 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0 coredns-668d6bf9bc- kube-system b5fcba99-ac61-4506-9ea4-62f848c483c1 863 0 2025-10-31 20:56:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-nqh7g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib157bf58389 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.572 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.607 [INFO][4655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" HandleID="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Workload="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.607 [INFO][4655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" HandleID="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Workload="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d460), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-668d6bf9bc-nqh7g", "timestamp":"2025-10-31 20:56:43.607632943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.607 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.607 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.607 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.620 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.630 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.637 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.639 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.643 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.643 [INFO][4655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.645 [INFO][4655] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.651 [INFO][4655] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.658 [INFO][4655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.658 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" host="localhost" Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.658 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 20:56:43.678715 containerd[1586]: 2025-10-31 20:56:43.658 [INFO][4655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" HandleID="k8s-pod-network.df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Workload="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.679306 containerd[1586]: 2025-10-31 20:56:43.660 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b5fcba99-ac61-4506-9ea4-62f848c483c1", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-nqh7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib157bf58389", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:43.679306 containerd[1586]: 2025-10-31 20:56:43.660 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.679306 containerd[1586]: 2025-10-31 20:56:43.660 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib157bf58389 ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.679306 containerd[1586]: 2025-10-31 20:56:43.664 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.679306 
containerd[1586]: 2025-10-31 20:56:43.665 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b5fcba99-ac61-4506-9ea4-62f848c483c1", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b", Pod:"coredns-668d6bf9bc-nqh7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib157bf58389", MAC:"fa:ac:30:31:85:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:43.679306 containerd[1586]: 2025-10-31 20:56:43.675 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" Namespace="kube-system" Pod="coredns-668d6bf9bc-nqh7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nqh7g-eth0" Oct 31 20:56:43.702751 containerd[1586]: time="2025-10-31T20:56:43.702689329Z" level=info msg="connecting to shim df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b" address="unix:///run/containerd/s/6c01829bf2feb63edee9f92f5add0de66ded157acb0d54790d3101304accacd6" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:43.708118 kubelet[2732]: E1031 20:56:43.708022 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" podUID="d8950dfd-888a-4512-a9c0-edda8417ecdd" Oct 31 20:56:43.709808 kubelet[2732]: E1031 20:56:43.709617 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:43.711168 kubelet[2732]: E1031 20:56:43.711117 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" podUID="74e39d71-e729-442a-ad78-d80f8756d7da" Oct 31 20:56:43.748330 systemd[1]: Started cri-containerd-df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b.scope - libcontainer container df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b. Oct 31 20:56:43.767151 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:43.773828 kubelet[2732]: I1031 20:56:43.773748 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4qcs2" podStartSLOduration=39.773728865 podStartE2EDuration="39.773728865s" podCreationTimestamp="2025-10-31 20:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 20:56:43.772585798 +0000 UTC m=+44.325884054" watchObservedRunningTime="2025-10-31 20:56:43.773728865 +0000 UTC m=+44.327027121" Oct 31 20:56:43.796603 containerd[1586]: time="2025-10-31T20:56:43.796482317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nqh7g,Uid:b5fcba99-ac61-4506-9ea4-62f848c483c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b\"" Oct 31 20:56:43.798612 kubelet[2732]: E1031 20:56:43.798583 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:43.800556 containerd[1586]: time="2025-10-31T20:56:43.800301595Z" level=info msg="CreateContainer within sandbox \"df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 20:56:43.812194 containerd[1586]: time="2025-10-31T20:56:43.812146985Z" level=info msg="Container 3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372: CDI devices from CRI Config.CDIDevices: []" Oct 31 20:56:43.819372 containerd[1586]: time="2025-10-31T20:56:43.819329738Z" level=info msg="CreateContainer within sandbox \"df5e23849cd7279d3aa2282fa20b924f096c59ea7400284c9ebcc9911a9a626b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372\"" Oct 31 20:56:43.819935 containerd[1586]: time="2025-10-31T20:56:43.819845906Z" level=info msg="StartContainer for \"3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372\"" Oct 31 20:56:43.821250 containerd[1586]: time="2025-10-31T20:56:43.821221275Z" level=info msg="connecting to shim 3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372" address="unix:///run/containerd/s/6c01829bf2feb63edee9f92f5add0de66ded157acb0d54790d3101304accacd6" protocol=ttrpc version=3 Oct 31 20:56:43.843279 systemd[1]: 
Started cri-containerd-3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372.scope - libcontainer container 3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372. Oct 31 20:56:43.868530 containerd[1586]: time="2025-10-31T20:56:43.868374773Z" level=info msg="StartContainer for \"3c58cb7d9f80d33934d6a28cdbc7826546927663e879bb70a33d8d45a2c86372\" returns successfully" Oct 31 20:56:44.112262 systemd-networkd[1504]: cali9645590fb71: Gained IPv6LL Oct 31 20:56:44.534114 containerd[1586]: time="2025-10-31T20:56:44.534021872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bmt48,Uid:ef80b07d-34c2-483b-b1fb-77de41f9c304,Namespace:calico-system,Attempt:0,}" Oct 31 20:56:44.560272 systemd-networkd[1504]: calicc50e9fed97: Gained IPv6LL Oct 31 20:56:44.668417 systemd-networkd[1504]: calic4e6b9c9ef3: Link UP Oct 31 20:56:44.669392 systemd-networkd[1504]: calic4e6b9c9ef3: Gained carrier Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.583 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--bmt48-eth0 goldmane-666569f655- calico-system ef80b07d-34c2-483b-b1fb-77de41f9c304 874 0 2025-10-31 20:56:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-bmt48 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic4e6b9c9ef3 [] [] }} ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.583 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.614 [INFO][4775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" HandleID="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Workload="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.614 [INFO][4775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" HandleID="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Workload="localhost-k8s-goldmane--666569f655--bmt48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001374b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-bmt48", "timestamp":"2025-10-31 20:56:44.614210583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.614 [INFO][4775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.614 [INFO][4775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.614 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.626 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.631 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.636 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.641 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.643 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.644 [INFO][4775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.647 [INFO][4775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426 Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.651 [INFO][4775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.663 [INFO][4775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.663 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" host="localhost" Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.663 [INFO][4775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 20:56:44.702879 containerd[1586]: 2025-10-31 20:56:44.663 [INFO][4775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" HandleID="k8s-pod-network.01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Workload="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.703778 containerd[1586]: 2025-10-31 20:56:44.665 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bmt48-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ef80b07d-34c2-483b-b1fb-77de41f9c304", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-bmt48", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic4e6b9c9ef3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:44.703778 containerd[1586]: 2025-10-31 20:56:44.666 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.703778 containerd[1586]: 2025-10-31 20:56:44.666 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4e6b9c9ef3 ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.703778 containerd[1586]: 2025-10-31 20:56:44.670 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.703778 containerd[1586]: 2025-10-31 20:56:44.672 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bmt48-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ef80b07d-34c2-483b-b1fb-77de41f9c304", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 20, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426", Pod:"goldmane-666569f655-bmt48", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic4e6b9c9ef3", MAC:"16:61:dd:40:d1:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 20:56:44.703778 containerd[1586]: 2025-10-31 20:56:44.700 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" Namespace="calico-system" Pod="goldmane-666569f655-bmt48" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bmt48-eth0" Oct 31 20:56:44.715506 kubelet[2732]: E1031 20:56:44.715466 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:44.717652 kubelet[2732]: E1031 20:56:44.717330 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:44.718536 kubelet[2732]: E1031 20:56:44.718444 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" podUID="d8950dfd-888a-4512-a9c0-edda8417ecdd" Oct 31 20:56:44.733055 kubelet[2732]: I1031 20:56:44.732786 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nqh7g" podStartSLOduration=40.732766631 podStartE2EDuration="40.732766631s" podCreationTimestamp="2025-10-31 20:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 20:56:44.731797102 +0000 UTC m=+45.285095358" watchObservedRunningTime="2025-10-31 20:56:44.732766631 +0000 UTC m=+45.286064887" Oct 31 20:56:44.753320 systemd-networkd[1504]: calib157bf58389: Gained IPv6LL Oct 31 20:56:44.754590 
containerd[1586]: time="2025-10-31T20:56:44.754525174Z" level=info msg="connecting to shim 01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426" address="unix:///run/containerd/s/e8ee3302b4d93e991304bd37debecd84db033e47dfe4e8d7ab14871619259de4" namespace=k8s.io protocol=ttrpc version=3 Oct 31 20:56:44.783368 systemd[1]: Started cri-containerd-01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426.scope - libcontainer container 01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426. Oct 31 20:56:44.799144 systemd-resolved[1282]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 20:56:44.820766 containerd[1586]: time="2025-10-31T20:56:44.820727129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bmt48,Uid:ef80b07d-34c2-483b-b1fb-77de41f9c304,Namespace:calico-system,Attempt:0,} returns sandbox id \"01b2a01583114eb4e4cb23c4bfb3d6b6d4b0d29efb430bfce98c8641b1966426\"" Oct 31 20:56:44.822396 containerd[1586]: time="2025-10-31T20:56:44.822368719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 20:56:45.031627 containerd[1586]: time="2025-10-31T20:56:45.031565318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:45.032630 containerd[1586]: time="2025-10-31T20:56:45.032586369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 20:56:45.032731 containerd[1586]: time="2025-10-31T20:56:45.032673577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:45.032904 kubelet[2732]: E1031 20:56:45.032847 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 20:56:45.032954 kubelet[2732]: E1031 20:56:45.032922 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 20:56:45.033333 kubelet[2732]: E1031 20:56:45.033055 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zk5j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bmt48_calico-system(ef80b07d-34c2-483b-b1fb-77de41f9c304): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:45.035365 kubelet[2732]: E1031 20:56:45.035246 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bmt48" podUID="ef80b07d-34c2-483b-b1fb-77de41f9c304" Oct 31 20:56:45.721335 kubelet[2732]: E1031 20:56:45.721164 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:45.721664 kubelet[2732]: E1031 20:56:45.721567 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:45.722737 kubelet[2732]: E1031 20:56:45.722701 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bmt48" podUID="ef80b07d-34c2-483b-b1fb-77de41f9c304" Oct 31 20:56:46.672272 systemd-networkd[1504]: calic4e6b9c9ef3: Gained IPv6LL Oct 31 20:56:46.723179 kubelet[2732]: E1031 20:56:46.723143 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:46.723991 kubelet[2732]: E1031 20:56:46.723526 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bmt48" podUID="ef80b07d-34c2-483b-b1fb-77de41f9c304" Oct 31 20:56:47.403835 kubelet[2732]: I1031 20:56:47.403540 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 20:56:47.404061 kubelet[2732]: E1031 20:56:47.404025 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:47.725608 kubelet[2732]: E1031 20:56:47.725509 2732 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 20:56:48.132165 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:34248.service - OpenSSH per-connection server daemon (10.0.0.1:34248). Oct 31 20:56:48.198346 sshd[4903]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:48.199670 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:48.204896 systemd-logind[1572]: New session 10 of user core. Oct 31 20:56:48.218292 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 20:56:48.320125 sshd[4906]: Connection closed by 10.0.0.1 port 34248 Oct 31 20:56:48.321380 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:48.333264 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:34248.service: Deactivated successfully. Oct 31 20:56:48.335196 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 20:56:48.335901 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. Oct 31 20:56:48.338429 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:34260.service - OpenSSH per-connection server daemon (10.0.0.1:34260). 
Oct 31 20:56:48.338908 systemd-logind[1572]: Removed session 10. Oct 31 20:56:48.394515 sshd[4920]: Accepted publickey for core from 10.0.0.1 port 34260 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:48.396363 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:48.400232 systemd-logind[1572]: New session 11 of user core. Oct 31 20:56:48.409267 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 20:56:48.561787 sshd[4923]: Connection closed by 10.0.0.1 port 34260 Oct 31 20:56:48.563130 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:48.572963 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:34260.service: Deactivated successfully. Oct 31 20:56:48.576917 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 20:56:48.578283 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Oct 31 20:56:48.583114 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:34274.service - OpenSSH per-connection server daemon (10.0.0.1:34274). Oct 31 20:56:48.584111 systemd-logind[1572]: Removed session 11. Oct 31 20:56:48.644238 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 34274 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:48.645583 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:48.650176 systemd-logind[1572]: New session 12 of user core. Oct 31 20:56:48.657243 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 20:56:48.784850 sshd[4938]: Connection closed by 10.0.0.1 port 34274 Oct 31 20:56:48.785212 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:48.790336 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:34274.service: Deactivated successfully. Oct 31 20:56:48.793635 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 20:56:48.795065 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Oct 31 20:56:48.796987 systemd-logind[1572]: Removed session 12. 
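On the pod_startup_latency_tracker entries earlier in the log: with no image-pull time recorded (firstStartedPulling is the zero time), both podStartSLOduration and podStartE2EDuration reduce to observedRunningTime minus podCreationTimestamp, e.g. 20:56:43.773728865 − 20:56:04 ≈ 39.77s for coredns-668d6bf9bc-4qcs2. A quick check of the arithmetic:

// Reproduce the podStartSLOduration arithmetic for coredns-668d6bf9bc-4qcs2.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-10-31T20:56:04Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-10-31T20:56:43.773728865Z")
	fmt.Println(running.Sub(created)) // 39.773728865s, matching podStartSLOduration
}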
Oct 31 20:56:51.535338 containerd[1586]: time="2025-10-31T20:56:51.535305230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 20:56:51.739929 containerd[1586]: time="2025-10-31T20:56:51.739826395Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:51.740852 containerd[1586]: time="2025-10-31T20:56:51.740818510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 20:56:51.740920 containerd[1586]: time="2025-10-31T20:56:51.740844912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:51.741020 kubelet[2732]: E1031 20:56:51.740984 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 20:56:51.741339 kubelet[2732]: E1031 20:56:51.741030 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 20:56:51.741592 kubelet[2732]: E1031 20:56:51.741165 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f1d5cb8199114639b2814101b4b71df3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5fh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f84b54649-5z922_calico-system(96bbfa9f-5f82-4712-bb67-615baa536087): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:51.743487 containerd[1586]: time="2025-10-31T20:56:51.743453829Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 20:56:51.963691 containerd[1586]: time="2025-10-31T20:56:51.963560731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:51.964545 containerd[1586]: time="2025-10-31T20:56:51.964501002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 20:56:51.964612 containerd[1586]: time="2025-10-31T20:56:51.964561327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:51.964772 kubelet[2732]: E1031 20:56:51.964725 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 20:56:51.964812 kubelet[2732]: E1031 20:56:51.964783 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 20:56:51.965283 kubelet[2732]: E1031 20:56:51.964921 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5fh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f84b54649-5z922_calico-system(96bbfa9f-5f82-4712-bb67-615baa536087): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:51.966140 kubelet[2732]: E1031 20:56:51.966108 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f84b54649-5z922" podUID="96bbfa9f-5f82-4712-bb67-615baa536087" Oct 31 20:56:53.810283 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:39524.service - OpenSSH per-connection server daemon (10.0.0.1:39524). Oct 31 20:56:53.860309 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 39524 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:53.861540 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:53.864857 systemd-logind[1572]: New session 13 of user core. Oct 31 20:56:53.879240 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 20:56:53.992894 sshd[4966]: Connection closed by 10.0.0.1 port 39524 Oct 31 20:56:53.993242 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:54.005568 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:39524.service: Deactivated successfully. Oct 31 20:56:54.007719 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 20:56:54.008792 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Oct 31 20:56:54.011544 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:39532.service - OpenSSH per-connection server daemon (10.0.0.1:39532). Oct 31 20:56:54.013747 systemd-logind[1572]: Removed session 13. Oct 31 20:56:54.078673 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 39532 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:54.080009 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:54.084056 systemd-logind[1572]: New session 14 of user core. Oct 31 20:56:54.093239 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 20:56:54.244059 sshd[4982]: Connection closed by 10.0.0.1 port 39532 Oct 31 20:56:54.244316 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:54.259021 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:39532.service: Deactivated successfully. Oct 31 20:56:54.260624 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 20:56:54.262349 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Oct 31 20:56:54.265257 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:39536.service - OpenSSH per-connection server daemon (10.0.0.1:39536). Oct 31 20:56:54.266185 systemd-logind[1572]: Removed session 14. 
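[editor's note] Every Calico pull in this log fails with a registry "not found" (HTTP 404) rather than a network or auth error, which points at the tags ghcr.io/flatcar/calico/*:v3.30.4 simply not resolving at pull time. One way to confirm that from any machine is to query the registry's v2 API directly. The sketch below does the standard anonymous token exchange against ghcr.io and then HEADs the manifest; the endpoints follow the usual OCI distribution flow, but treat the anonymous-access behavior as an assumption (private or deleted repositories may answer 401/403 at the token or manifest step instead of 404).

#!/usr/bin/env python3
"""Check whether an image tag resolves on ghcr.io via the registry v2 API (a sketch)."""
import json
import sys
import urllib.error
import urllib.request

def tag_exists(repository, tag):
    # Anonymous pull token, the standard v2 token flow used by ghcr.io.
    token_url = (f"https://ghcr.io/token?service=ghcr.io"
                 f"&scope=repository:{repository}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # HEAD the manifest; a 404 here matches containerd's "failed to resolve image ... not found".
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repository}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    # Defaults chosen to match one of the failing pulls above; any repo:tag can be passed instead.
    repo, tag = sys.argv[1:3] if len(sys.argv) >= 3 else ("flatcar/calico/goldmane", "v3.30.4")
    print(f"{repo}:{tag} ->", "found" if tag_exists(repo, tag) else "not found")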
Oct 31 20:56:54.322986 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 39536 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:54.324035 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:54.327837 systemd-logind[1572]: New session 15 of user core. Oct 31 20:56:54.337218 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 20:56:54.885059 sshd[4997]: Connection closed by 10.0.0.1 port 39536 Oct 31 20:56:54.885787 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:54.895273 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:39536.service: Deactivated successfully. Oct 31 20:56:54.897258 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 20:56:54.900326 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Oct 31 20:56:54.905374 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:39552.service - OpenSSH per-connection server daemon (10.0.0.1:39552). Oct 31 20:56:54.906636 systemd-logind[1572]: Removed session 15. Oct 31 20:56:54.962148 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 39552 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:54.963364 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:54.967616 systemd-logind[1572]: New session 16 of user core. Oct 31 20:56:54.975265 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 20:56:55.219464 sshd[5024]: Connection closed by 10.0.0.1 port 39552 Oct 31 20:56:55.217736 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:55.227867 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:39552.service: Deactivated successfully. Oct 31 20:56:55.230123 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 20:56:55.232263 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. Oct 31 20:56:55.235392 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:39560.service - OpenSSH per-connection server daemon (10.0.0.1:39560). Oct 31 20:56:55.236073 systemd-logind[1572]: Removed session 16. Oct 31 20:56:55.291981 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 39560 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:56:55.293336 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:56:55.297934 systemd-logind[1572]: New session 17 of user core. Oct 31 20:56:55.308252 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 20:56:55.392588 sshd[5039]: Connection closed by 10.0.0.1 port 39560 Oct 31 20:56:55.392903 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Oct 31 20:56:55.396122 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:39560.service: Deactivated successfully. Oct 31 20:56:55.398187 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 20:56:55.400555 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. Oct 31 20:56:55.401602 systemd-logind[1572]: Removed session 17. 
Oct 31 20:56:56.535080 containerd[1586]: time="2025-10-31T20:56:56.534727176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 20:56:56.753324 containerd[1586]: time="2025-10-31T20:56:56.753140215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:56.754233 containerd[1586]: time="2025-10-31T20:56:56.754080237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 20:56:56.754233 containerd[1586]: time="2025-10-31T20:56:56.754161883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:56.754890 kubelet[2732]: E1031 20:56:56.754427 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 20:56:56.754890 kubelet[2732]: E1031 20:56:56.754470 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 20:56:56.754890 kubelet[2732]: E1031 20:56:56.754568 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kjzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fzzpl_calico-system(85524551-e531-4ebd-be44-e40fd94305ba): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:56.757876 containerd[1586]: time="2025-10-31T20:56:56.757358735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 20:56:56.956923 containerd[1586]: time="2025-10-31T20:56:56.956798356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:56.958007 containerd[1586]: time="2025-10-31T20:56:56.957891428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 20:56:56.958007 containerd[1586]: time="2025-10-31T20:56:56.957931351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:56.958162 kubelet[2732]: E1031 20:56:56.958103 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 20:56:56.958162 kubelet[2732]: E1031 20:56:56.958147 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 20:56:56.958411 kubelet[2732]: E1031 20:56:56.958247 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kjzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fzzpl_calico-system(85524551-e531-4ebd-be44-e40fd94305ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:56.959823 kubelet[2732]: E1031 20:56:56.959776 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:56:57.535704 containerd[1586]: time="2025-10-31T20:56:57.535594910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 20:56:57.733594 containerd[1586]: time="2025-10-31T20:56:57.733529939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:57.734531 containerd[1586]: time="2025-10-31T20:56:57.734450519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 20:56:57.734811 kubelet[2732]: E1031 20:56:57.734743 2732 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:57.734811 kubelet[2732]: E1031 20:56:57.734803 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:57.735141 kubelet[2732]: E1031 20:56:57.735045 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqvpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd5bcf7c-d7m9d_calico-apiserver(bba5ef03-9e42-43a6-ab98-a0179f6b153f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:57.736675 containerd[1586]: time="2025-10-31T20:56:57.734612849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:57.736675 containerd[1586]: time="2025-10-31T20:56:57.735606474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 20:56:57.737108 kubelet[2732]: E1031 20:56:57.736997 2732 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" podUID="bba5ef03-9e42-43a6-ab98-a0179f6b153f" Oct 31 20:56:57.945933 containerd[1586]: time="2025-10-31T20:56:57.945813416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:57.946955 containerd[1586]: time="2025-10-31T20:56:57.946918568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 20:56:57.947036 containerd[1586]: time="2025-10-31T20:56:57.947002453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:57.947458 kubelet[2732]: E1031 20:56:57.947215 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:57.947458 kubelet[2732]: E1031 20:56:57.947266 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 20:56:57.947458 kubelet[2732]: E1031 20:56:57.947397 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd5bcf7c-q2hch_calico-apiserver(d8950dfd-888a-4512-a9c0-edda8417ecdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:57.948756 kubelet[2732]: E1031 20:56:57.948709 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" podUID="d8950dfd-888a-4512-a9c0-edda8417ecdd" Oct 31 20:56:58.535361 containerd[1586]: time="2025-10-31T20:56:58.535302559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 20:56:58.754121 containerd[1586]: time="2025-10-31T20:56:58.754031937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:58.755392 containerd[1586]: time="2025-10-31T20:56:58.755314018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 20:56:58.755392 containerd[1586]: time="2025-10-31T20:56:58.755357181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:58.755588 kubelet[2732]: E1031 20:56:58.755495 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 20:56:58.755588 kubelet[2732]: E1031 20:56:58.755533 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 20:56:58.756161 kubelet[2732]: E1031 20:56:58.755743 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zk5j8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bmt48_calico-system(ef80b07d-34c2-483b-b1fb-77de41f9c304): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:58.757281 containerd[1586]: time="2025-10-31T20:56:58.755807969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 20:56:58.757334 kubelet[2732]: E1031 20:56:58.757288 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bmt48" 
podUID="ef80b07d-34c2-483b-b1fb-77de41f9c304" Oct 31 20:56:58.980687 containerd[1586]: time="2025-10-31T20:56:58.980467201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 20:56:58.981892 containerd[1586]: time="2025-10-31T20:56:58.981836767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 20:56:58.981961 containerd[1586]: time="2025-10-31T20:56:58.981867569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Oct 31 20:56:58.982120 kubelet[2732]: E1031 20:56:58.982061 2732 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 20:56:58.982426 kubelet[2732]: E1031 20:56:58.982131 2732 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 20:56:58.983373 kubelet[2732]: E1031 20:56:58.982605 2732 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bdff9fc5-g6ppj_calico-system(74e39d71-e729-442a-ad78-d80f8756d7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 20:56:58.983912 kubelet[2732]: E1031 20:56:58.983872 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" podUID="74e39d71-e729-442a-ad78-d80f8756d7da" Oct 31 20:57:00.404276 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:60862.service - OpenSSH per-connection server daemon (10.0.0.1:60862). Oct 31 20:57:00.478267 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 60862 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:57:00.480241 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:57:00.484946 systemd-logind[1572]: New session 18 of user core. Oct 31 20:57:00.499268 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 20:57:00.577412 sshd[5065]: Connection closed by 10.0.0.1 port 60862 Oct 31 20:57:00.577750 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Oct 31 20:57:00.582055 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:60862.service: Deactivated successfully. Oct 31 20:57:00.584334 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 20:57:00.585574 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. Oct 31 20:57:00.587270 systemd-logind[1572]: Removed session 18. Oct 31 20:57:05.591835 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:60878.service - OpenSSH per-connection server daemon (10.0.0.1:60878). Oct 31 20:57:05.656606 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 60878 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:57:05.657643 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:57:05.661312 systemd-logind[1572]: New session 19 of user core. Oct 31 20:57:05.672235 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 31 20:57:05.745942 sshd[5087]: Connection closed by 10.0.0.1 port 60878 Oct 31 20:57:05.746245 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Oct 31 20:57:05.749710 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:60878.service: Deactivated successfully. Oct 31 20:57:05.752220 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 20:57:05.752972 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. Oct 31 20:57:05.753885 systemd-logind[1572]: Removed session 19. Oct 31 20:57:06.535455 kubelet[2732]: E1031 20:57:06.535296 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f84b54649-5z922" podUID="96bbfa9f-5f82-4712-bb67-615baa536087" Oct 31 20:57:09.535597 kubelet[2732]: E1031 20:57:09.535538 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-q2hch" podUID="d8950dfd-888a-4512-a9c0-edda8417ecdd" Oct 31 20:57:09.537065 kubelet[2732]: E1031 20:57:09.535725 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd5bcf7c-d7m9d" podUID="bba5ef03-9e42-43a6-ab98-a0179f6b153f" Oct 31 20:57:10.769303 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:52944.service - OpenSSH per-connection server daemon (10.0.0.1:52944). Oct 31 20:57:10.815916 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 52944 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:57:10.817401 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:57:10.823766 systemd-logind[1572]: New session 20 of user core. Oct 31 20:57:10.829306 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 20:57:10.910686 sshd[5105]: Connection closed by 10.0.0.1 port 52944 Oct 31 20:57:10.911007 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Oct 31 20:57:10.914564 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:52944.service: Deactivated successfully. 
Oct 31 20:57:10.916403 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 20:57:10.917173 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Oct 31 20:57:10.918019 systemd-logind[1572]: Removed session 20. Oct 31 20:57:11.535808 kubelet[2732]: E1031 20:57:11.535676 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fzzpl" podUID="85524551-e531-4ebd-be44-e40fd94305ba" Oct 31 20:57:12.534563 kubelet[2732]: E1031 20:57:12.534506 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bmt48" podUID="ef80b07d-34c2-483b-b1fb-77de41f9c304" Oct 31 20:57:14.534914 kubelet[2732]: E1031 20:57:14.534868 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bdff9fc5-g6ppj" podUID="74e39d71-e729-442a-ad78-d80f8756d7da" Oct 31 20:57:15.922484 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:52948.service - OpenSSH per-connection server daemon (10.0.0.1:52948). Oct 31 20:57:15.990450 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 52948 ssh2: RSA SHA256:Wql/+blyQT6WvPgJ2iMKpOgFFB4GHOVuKpE0zRiEHIg Oct 31 20:57:15.991970 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 20:57:15.995768 systemd-logind[1572]: New session 21 of user core. Oct 31 20:57:16.008276 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 20:57:16.096221 sshd[5123]: Connection closed by 10.0.0.1 port 52948 Oct 31 20:57:16.096577 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Oct 31 20:57:16.100404 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:52948.service: Deactivated successfully. Oct 31 20:57:16.102242 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 20:57:16.102912 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Oct 31 20:57:16.103824 systemd-logind[1572]: Removed session 21.
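[editor's note] After the initial ErrImagePull failures around 20:56:51-58, the kubelet keeps the affected pods in ImagePullBackOff and the "Back-off pulling image" messages above recur at each pod sync while the retry delay grows. The sketch below just prints an exponential back-off schedule to show why the retries space out; the 10 s base and 5 min cap are assumed to match kubelet's default image back-off and are not taken from this log.

#!/usr/bin/env python3
"""Illustrative exponential image-pull back-off (assumed 10 s base, 300 s cap; not kubelet source)."""

def backoff_schedule(base=10.0, cap=300.0, attempts=8):
    delay, schedule = base, []
    for _ in range(attempts):
        schedule.append(delay)
        delay = min(delay * 2, cap)  # double up to the cap, as a typical back-off does
    return schedule

if __name__ == "__main__":
    print(" -> ".join(f"{d:.0f}s" for d in backoff_schedule()))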