Oct 28 23:14:46.328827 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 28 23:14:46.328852 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Oct 28 21:26:42 -00 2025
Oct 28 23:14:46.328860 kernel: KASLR enabled
Oct 28 23:14:46.328866 kernel: efi: EFI v2.7 by EDK II
Oct 28 23:14:46.328872 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 28 23:14:46.328878 kernel: random: crng init done
Oct 28 23:14:46.328886 kernel: secureboot: Secure boot disabled
Oct 28 23:14:46.328892 kernel: ACPI: Early table checksum verification disabled
Oct 28 23:14:46.328899 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 28 23:14:46.328906 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 28 23:14:46.328912 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328918 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328924 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328930 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328939 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328946 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328952 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328959 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328965 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:14:46.328972 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 28 23:14:46.328978 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 28 23:14:46.328985 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 28 23:14:46.328993 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 28 23:14:46.329000 kernel: Zone ranges:
Oct 28 23:14:46.329006 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 28 23:14:46.329013 kernel: DMA32 empty
Oct 28 23:14:46.329019 kernel: Normal empty
Oct 28 23:14:46.329026 kernel: Device empty
Oct 28 23:14:46.329032 kernel: Movable zone start for each node
Oct 28 23:14:46.329038 kernel: Early memory node ranges
Oct 28 23:14:46.329045 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 28 23:14:46.329052 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 28 23:14:46.329058 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 28 23:14:46.329065 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 28 23:14:46.329073 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 28 23:14:46.329080 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 28 23:14:46.329086 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 28 23:14:46.329092 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 28 23:14:46.329099 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 28 23:14:46.329105 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 28 23:14:46.329116 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 28 23:14:46.329123 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 28 23:14:46.329130 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 28 23:14:46.329137 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 28 23:14:46.329144 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 28 23:14:46.329151 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 28 23:14:46.329158 kernel: psci: probing for conduit method from ACPI.
Oct 28 23:14:46.329165 kernel: psci: PSCIv1.1 detected in firmware.
Oct 28 23:14:46.329196 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 28 23:14:46.329204 kernel: psci: Trusted OS migration not required
Oct 28 23:14:46.329211 kernel: psci: SMC Calling Convention v1.1
Oct 28 23:14:46.329217 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 28 23:14:46.329224 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 28 23:14:46.329232 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 28 23:14:46.329239 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 28 23:14:46.329246 kernel: Detected PIPT I-cache on CPU0
Oct 28 23:14:46.329253 kernel: CPU features: detected: GIC system register CPU interface
Oct 28 23:14:46.329260 kernel: CPU features: detected: Spectre-v4
Oct 28 23:14:46.329267 kernel: CPU features: detected: Spectre-BHB
Oct 28 23:14:46.329276 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 28 23:14:46.329283 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 28 23:14:46.329290 kernel: CPU features: detected: ARM erratum 1418040
Oct 28 23:14:46.329297 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 28 23:14:46.329304 kernel: alternatives: applying boot alternatives
Oct 28 23:14:46.329312 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d4a291c245609e5c237181e704ec1c7ec0a6d72eca92291e03117b7440b9f526
Oct 28 23:14:46.329319 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 28 23:14:46.329741 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 28 23:14:46.329752 kernel: Fallback order for Node 0: 0
Oct 28 23:14:46.329759 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 28 23:14:46.329770 kernel: Policy zone: DMA
Oct 28 23:14:46.329777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 28 23:14:46.329784 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 28 23:14:46.329791 kernel: software IO TLB: area num 4.
Oct 28 23:14:46.329798 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 28 23:14:46.329805 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 28 23:14:46.329811 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 28 23:14:46.329818 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 28 23:14:46.329825 kernel: rcu: RCU event tracing is enabled.
Oct 28 23:14:46.329833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 28 23:14:46.329839 kernel: Trampoline variant of Tasks RCU enabled.
Oct 28 23:14:46.329848 kernel: Tracing variant of Tasks RCU enabled.
Oct 28 23:14:46.329855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 28 23:14:46.329862 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 28 23:14:46.329868 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 23:14:46.329875 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 23:14:46.329882 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 28 23:14:46.329889 kernel: GICv3: 256 SPIs implemented
Oct 28 23:14:46.329896 kernel: GICv3: 0 Extended SPIs implemented
Oct 28 23:14:46.329903 kernel: Root IRQ handler: gic_handle_irq
Oct 28 23:14:46.329909 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 28 23:14:46.329916 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 28 23:14:46.329924 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 28 23:14:46.329931 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 28 23:14:46.329938 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 28 23:14:46.329945 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 28 23:14:46.329952 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 28 23:14:46.329959 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 28 23:14:46.329966 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 28 23:14:46.329973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:14:46.329980 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 28 23:14:46.329987 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 28 23:14:46.329993 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 28 23:14:46.330002 kernel: arm-pv: using stolen time PV
Oct 28 23:14:46.330009 kernel: Console: colour dummy device 80x25
Oct 28 23:14:46.330017 kernel: ACPI: Core revision 20240827
Oct 28 23:14:46.330024 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 28 23:14:46.330031 kernel: pid_max: default: 32768 minimum: 301
Oct 28 23:14:46.330038 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 28 23:14:46.330045 kernel: landlock: Up and running.
Oct 28 23:14:46.330052 kernel: SELinux: Initializing.
Oct 28 23:14:46.330061 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 23:14:46.330068 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 23:14:46.330075 kernel: rcu: Hierarchical SRCU implementation.
Oct 28 23:14:46.330082 kernel: rcu: Max phase no-delay instances is 400.
Oct 28 23:14:46.330090 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 28 23:14:46.330097 kernel: Remapping and enabling EFI services.
Oct 28 23:14:46.330104 kernel: smp: Bringing up secondary CPUs ...
Oct 28 23:14:46.330113 kernel: Detected PIPT I-cache on CPU1
Oct 28 23:14:46.330124 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 28 23:14:46.330133 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 28 23:14:46.330140 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:14:46.330147 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 28 23:14:46.330155 kernel: Detected PIPT I-cache on CPU2
Oct 28 23:14:46.330162 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 28 23:14:46.330185 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 28 23:14:46.330193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:14:46.330201 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 28 23:14:46.330208 kernel: Detected PIPT I-cache on CPU3
Oct 28 23:14:46.330216 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 28 23:14:46.330223 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 28 23:14:46.330231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:14:46.330240 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 28 23:14:46.330248 kernel: smp: Brought up 1 node, 4 CPUs
Oct 28 23:14:46.330255 kernel: SMP: Total of 4 processors activated.
Oct 28 23:14:46.330262 kernel: CPU: All CPU(s) started at EL1
Oct 28 23:14:46.330270 kernel: CPU features: detected: 32-bit EL0 Support
Oct 28 23:14:46.330277 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 28 23:14:46.330285 kernel: CPU features: detected: Common not Private translations
Oct 28 23:14:46.330294 kernel: CPU features: detected: CRC32 instructions
Oct 28 23:14:46.330301 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 28 23:14:46.330309 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 28 23:14:46.330316 kernel: CPU features: detected: LSE atomic instructions
Oct 28 23:14:46.330324 kernel: CPU features: detected: Privileged Access Never
Oct 28 23:14:46.330331 kernel: CPU features: detected: RAS Extension Support
Oct 28 23:14:46.330339 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 28 23:14:46.330346 kernel: alternatives: applying system-wide alternatives
Oct 28 23:14:46.330355 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 28 23:14:46.330364 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved)
Oct 28 23:14:46.330371 kernel: devtmpfs: initialized
Oct 28 23:14:46.330379 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 28 23:14:46.330386 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 28 23:14:46.330394 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 28 23:14:46.330401 kernel: 0 pages in range for non-PLT usage
Oct 28 23:14:46.330410 kernel: 515056 pages in range for PLT usage
Oct 28 23:14:46.330417 kernel: pinctrl core: initialized pinctrl subsystem
Oct 28 23:14:46.330424 kernel: SMBIOS 3.0.0 present.
Oct 28 23:14:46.330432 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 28 23:14:46.330440 kernel: DMI: Memory slots populated: 1/1
Oct 28 23:14:46.330447 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 28 23:14:46.330454 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 28 23:14:46.330464 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 28 23:14:46.330471 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 28 23:14:46.330479 kernel: audit: initializing netlink subsys (disabled)
Oct 28 23:14:46.330486 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Oct 28 23:14:46.330494 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 28 23:14:46.330501 kernel: cpuidle: using governor menu
Oct 28 23:14:46.330509 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 28 23:14:46.330517 kernel: ASID allocator initialised with 32768 entries
Oct 28 23:14:46.330525 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 28 23:14:46.330532 kernel: Serial: AMBA PL011 UART driver
Oct 28 23:14:46.330540 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 28 23:14:46.330547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 28 23:14:46.330555 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 28 23:14:46.330563 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 28 23:14:46.330570 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 28 23:14:46.330579 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 28 23:14:46.330586 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 28 23:14:46.330593 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 28 23:14:46.330601 kernel: ACPI: Added _OSI(Module Device)
Oct 28 23:14:46.330608 kernel: ACPI: Added _OSI(Processor Device)
Oct 28 23:14:46.330616 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 28 23:14:46.330623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 28 23:14:46.330632 kernel: ACPI: Interpreter enabled
Oct 28 23:14:46.330639 kernel: ACPI: Using GIC for interrupt routing
Oct 28 23:14:46.330647 kernel: ACPI: MCFG table detected, 1 entries
Oct 28 23:14:46.330654 kernel: ACPI: CPU0 has been hot-added
Oct 28 23:14:46.330661 kernel: ACPI: CPU1 has been hot-added
Oct 28 23:14:46.330669 kernel: ACPI: CPU2 has been hot-added
Oct 28 23:14:46.330676 kernel: ACPI: CPU3 has been hot-added
Oct 28 23:14:46.330683 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 28 23:14:46.330700 kernel: printk: legacy console [ttyAMA0] enabled
Oct 28 23:14:46.330708 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 28 23:14:46.330867 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 28 23:14:46.330953 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 28 23:14:46.331033 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 28 23:14:46.331115 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 28 23:14:46.331215 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 28 23:14:46.331226 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 28 23:14:46.331238 kernel: PCI host bridge to bus 0000:00
Oct 28 23:14:46.331371 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 28 23:14:46.331449 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 28 23:14:46.331523 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 28 23:14:46.331596 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 28 23:14:46.331702 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 28 23:14:46.331802 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 28 23:14:46.331895 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 28 23:14:46.331981 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 28 23:14:46.332076 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 28 23:14:46.332160 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 28 23:14:46.332258 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 28 23:14:46.332340 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 28 23:14:46.332417 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 28 23:14:46.332492 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 28 23:14:46.332575 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 28 23:14:46.332585 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 28 23:14:46.332593 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 28 23:14:46.332601 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 28 23:14:46.332609 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 28 23:14:46.332617 kernel: iommu: Default domain type: Translated
Oct 28 23:14:46.332627 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 28 23:14:46.332635 kernel: efivars: Registered efivars operations
Oct 28 23:14:46.332643 kernel: vgaarb: loaded
Oct 28 23:14:46.332651 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 28 23:14:46.332658 kernel: VFS: Disk quotas dquot_6.6.0
Oct 28 23:14:46.332667 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 28 23:14:46.332675 kernel: pnp: PnP ACPI init
Oct 28 23:14:46.332779 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 28 23:14:46.332791 kernel: pnp: PnP ACPI: found 1 devices
Oct 28 23:14:46.332799 kernel: NET: Registered PF_INET protocol family
Oct 28 23:14:46.332807 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 28 23:14:46.332815 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 28 23:14:46.332822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 28 23:14:46.332830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 28 23:14:46.332840 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 28 23:14:46.332848 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 28 23:14:46.332856 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 23:14:46.332864 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 23:14:46.332872 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 28 23:14:46.332880 kernel: PCI: CLS 0 bytes, default 64
Oct 28 23:14:46.332887 kernel: kvm [1]: HYP mode not available
Oct 28 23:14:46.332897 kernel: Initialise system trusted keyrings
Oct 28 23:14:46.332905 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 28 23:14:46.332913 kernel: Key type asymmetric registered
Oct 28 23:14:46.332921 kernel: Asymmetric key parser 'x509' registered
Oct 28 23:14:46.332928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 28 23:14:46.332936 kernel: io scheduler mq-deadline registered
Oct 28 23:14:46.332944 kernel: io scheduler kyber registered
Oct 28 23:14:46.332953 kernel: io scheduler bfq registered
Oct 28 23:14:46.332961 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 28 23:14:46.332969 kernel: ACPI: button: Power Button [PWRB]
Oct 28 23:14:46.332977 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 28 23:14:46.333064 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 28 23:14:46.333074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 28 23:14:46.333082 kernel: thunder_xcv, ver 1.0
Oct 28 23:14:46.333092 kernel: thunder_bgx, ver 1.0
Oct 28 23:14:46.333100 kernel: nicpf, ver 1.0
Oct 28 23:14:46.333108 kernel: nicvf, ver 1.0
Oct 28 23:14:46.333218 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 28 23:14:46.333306 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-28T23:14:45 UTC (1761693285)
Oct 28 23:14:46.333316 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 28 23:14:46.333325 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 28 23:14:46.333335 kernel: watchdog: NMI not fully supported
Oct 28 23:14:46.333343 kernel: watchdog: Hard watchdog permanently disabled
Oct 28 23:14:46.333351 kernel: NET: Registered PF_INET6 protocol family
Oct 28 23:14:46.333359 kernel: Segment Routing with IPv6
Oct 28 23:14:46.333367 kernel: In-situ OAM (IOAM) with IPv6
Oct 28 23:14:46.333374 kernel: NET: Registered PF_PACKET protocol family
Oct 28 23:14:46.333382 kernel: Key type dns_resolver registered
Oct 28 23:14:46.333392 kernel: registered taskstats version 1
Oct 28 23:14:46.333400 kernel: Loading compiled-in X.509 certificates
Oct 28 23:14:46.333408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6fcb7d180c1be2ee10062a730ec189aabf489c70'
Oct 28 23:14:46.333416 kernel: Demotion targets for Node 0: null
Oct 28 23:14:46.333424 kernel: Key type .fscrypt registered
Oct 28 23:14:46.333432 kernel: Key type fscrypt-provisioning registered
Oct 28 23:14:46.333440 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 28 23:14:46.333449 kernel: ima: Allocated hash algorithm: sha1
Oct 28 23:14:46.333457 kernel: ima: No architecture policies found
Oct 28 23:14:46.333465 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 28 23:14:46.333472 kernel: clk: Disabling unused clocks
Oct 28 23:14:46.333480 kernel: PM: genpd: Disabling unused power domains
Oct 28 23:14:46.333488 kernel: Freeing unused kernel memory: 12992K
Oct 28 23:14:46.333496 kernel: Run /init as init process
Oct 28 23:14:46.333505 kernel: with arguments:
Oct 28 23:14:46.333513 kernel: /init
Oct 28 23:14:46.333521 kernel: with environment:
Oct 28 23:14:46.333529 kernel: HOME=/
Oct 28 23:14:46.333536 kernel: TERM=linux
Oct 28 23:14:46.333633 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 28 23:14:46.333727 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 28 23:14:46.333740 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 28 23:14:46.333749 kernel: GPT:16515071 != 27000831
Oct 28 23:14:46.333756 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 28 23:14:46.333764 kernel: GPT:16515071 != 27000831
Oct 28 23:14:46.333771 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 28 23:14:46.333780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 28 23:14:46.333789 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333797 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333804 kernel: SCSI subsystem initialized
Oct 28 23:14:46.333812 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333820 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 28 23:14:46.333828 kernel: device-mapper: uevent: version 1.0.3
Oct 28 23:14:46.333836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 28 23:14:46.333846 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 28 23:14:46.333854 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333861 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333869 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333877 kernel: raid6: neonx8 gen() 15785 MB/s
Oct 28 23:14:46.333884 kernel: raid6: neonx4 gen() 15802 MB/s
Oct 28 23:14:46.333892 kernel: raid6: neonx2 gen() 13236 MB/s
Oct 28 23:14:46.333900 kernel: raid6: neonx1 gen() 10482 MB/s
Oct 28 23:14:46.333909 kernel: raid6: int64x8 gen() 6909 MB/s
Oct 28 23:14:46.333917 kernel: raid6: int64x4 gen() 7350 MB/s
Oct 28 23:14:46.333925 kernel: raid6: int64x2 gen() 6109 MB/s
Oct 28 23:14:46.333933 kernel: raid6: int64x1 gen() 5047 MB/s
Oct 28 23:14:46.333941 kernel: raid6: using algorithm neonx4 gen() 15802 MB/s
Oct 28 23:14:46.333949 kernel: raid6: .... xor() 12366 MB/s, rmw enabled
Oct 28 23:14:46.333957 kernel: raid6: using neon recovery algorithm
Oct 28 23:14:46.333966 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333973 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333981 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333989 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.333996 kernel: xor: measuring software checksum speed
Oct 28 23:14:46.334004 kernel: 8regs : 21601 MB/sec
Oct 28 23:14:46.334012 kernel: 32regs : 20913 MB/sec
Oct 28 23:14:46.334020 kernel: arm64_neon : 28022 MB/sec
Oct 28 23:14:46.334029 kernel: xor: using function: arm64_neon (28022 MB/sec)
Oct 28 23:14:46.334037 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.334044 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 28 23:14:46.334053 kernel: BTRFS: device fsid a3ab90fd-8914-4fc1-b889-c46e416b99c2 devid 1 transid 43 /dev/mapper/usr (253:0) scanned by mount (204)
Oct 28 23:14:46.334061 kernel: BTRFS info (device dm-0): first mount of filesystem a3ab90fd-8914-4fc1-b889-c46e416b99c2
Oct 28 23:14:46.334069 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 28 23:14:46.334076 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 28 23:14:46.334084 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 28 23:14:46.334093 kernel: Invalid ELF header magic: != \u007fELF
Oct 28 23:14:46.334100 kernel: loop: module loaded
Oct 28 23:14:46.334108 kernel: loop0: detected capacity change from 0 to 91480
Oct 28 23:14:46.334116 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 28 23:14:46.334125 systemd[1]: Successfully made /usr/ read-only.
Oct 28 23:14:46.334135 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 23:14:46.334146 systemd[1]: Detected virtualization kvm.
Oct 28 23:14:46.334154 systemd[1]: Detected architecture arm64.
Oct 28 23:14:46.334162 systemd[1]: Running in initrd.
Oct 28 23:14:46.334188 systemd[1]: No hostname configured, using default hostname.
Oct 28 23:14:46.334200 systemd[1]: Hostname set to .
Oct 28 23:14:46.334209 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 28 23:14:46.334220 systemd[1]: Queued start job for default target initrd.target.
Oct 28 23:14:46.334228 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 28 23:14:46.334236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 23:14:46.334245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 23:14:46.334253 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 28 23:14:46.334262 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 23:14:46.334272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 28 23:14:46.334288 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 28 23:14:46.334297 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 23:14:46.334306 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 23:14:46.334314 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 28 23:14:46.334324 systemd[1]: Reached target paths.target - Path Units.
Oct 28 23:14:46.334332 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 23:14:46.334340 systemd[1]: Reached target swap.target - Swaps.
Oct 28 23:14:46.334349 systemd[1]: Reached target timers.target - Timer Units.
Oct 28 23:14:46.334357 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 23:14:46.334365 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 23:14:46.334374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 28 23:14:46.334384 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 28 23:14:46.334392 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 23:14:46.334400 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 23:14:46.334408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 23:14:46.334417 systemd[1]: Reached target sockets.target - Socket Units.
Oct 28 23:14:46.334425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 28 23:14:46.334435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 28 23:14:46.334444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 23:14:46.334452 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 28 23:14:46.334461 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 28 23:14:46.334470 systemd[1]: Starting systemd-fsck-usr.service...
Oct 28 23:14:46.334478 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 23:14:46.334486 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 23:14:46.334497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 23:14:46.334506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 28 23:14:46.334515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 23:14:46.334523 systemd[1]: Finished systemd-fsck-usr.service.
Oct 28 23:14:46.334533 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 28 23:14:46.334560 systemd-journald[344]: Collecting audit messages is disabled.
Oct 28 23:14:46.334580 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 28 23:14:46.334590 kernel: Bridge firewalling registered
Oct 28 23:14:46.334598 systemd-journald[344]: Journal started
Oct 28 23:14:46.334616 systemd-journald[344]: Runtime Journal (/run/log/journal/f1c0e61bba554bc19b2a20413066e088) is 6M, max 48.5M, 42.4M free.
Oct 28 23:14:46.331951 systemd-modules-load[345]: Inserted module 'br_netfilter'
Oct 28 23:14:46.338632 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 28 23:14:46.342197 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 23:14:46.342735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 23:14:46.346771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 28 23:14:46.348790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 28 23:14:46.353361 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 28 23:14:46.359396 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 28 23:14:46.362085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 28 23:14:46.367039 systemd-tmpfiles[364]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 28 23:14:46.372260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 23:14:46.374471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 23:14:46.377903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 28 23:14:46.379984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 23:14:46.383074 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 28 23:14:46.385557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 28 23:14:46.404434 dracut-cmdline[386]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d4a291c245609e5c237181e704ec1c7ec0a6d72eca92291e03117b7440b9f526
Oct 28 23:14:46.428455 systemd-resolved[387]: Positive Trust Anchors:
Oct 28 23:14:46.428472 systemd-resolved[387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 28 23:14:46.428476 systemd-resolved[387]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 28 23:14:46.428506 systemd-resolved[387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 28 23:14:46.451135 systemd-resolved[387]: Defaulting to hostname 'linux'.
Oct 28 23:14:46.452369 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 28 23:14:46.453568 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 28 23:14:46.485199 kernel: Loading iSCSI transport class v2.0-870.
Oct 28 23:14:46.493199 kernel: iscsi: registered transport (tcp)
Oct 28 23:14:46.506272 kernel: iscsi: registered transport (qla4xxx)
Oct 28 23:14:46.506308 kernel: QLogic iSCSI HBA Driver
Oct 28 23:14:46.526184 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 23:14:46.549112 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 23:14:46.550747 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 28 23:14:46.600263 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 28 23:14:46.602825 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 28 23:14:46.604631 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 28 23:14:46.638613 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 28 23:14:46.641281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 23:14:46.671629 systemd-udevd[626]: Using default interface naming scheme 'v257'.
Oct 28 23:14:46.679437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 23:14:46.683874 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 28 23:14:46.705034 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 28 23:14:46.709457 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 28 23:14:46.712726 dracut-pre-trigger[701]: rd.md=0: removing MD RAID activation
Oct 28 23:14:46.736555 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 28 23:14:46.739041 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 28 23:14:46.750859 systemd-networkd[733]: lo: Link UP
Oct 28 23:14:46.750867 systemd-networkd[733]: lo: Gained carrier
Oct 28 23:14:46.751291 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 28 23:14:46.752781 systemd[1]: Reached target network.target - Network.
Oct 28 23:14:46.798576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 23:14:46.802843 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 28 23:14:46.846973 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 28 23:14:46.856579 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 28 23:14:46.869273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 28 23:14:46.875751 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 28 23:14:46.882142 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 28 23:14:46.895968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 28 23:14:46.896081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 23:14:46.899294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 23:14:46.899583 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 28 23:14:46.906212 disk-uuid[800]: Primary Header is updated.
Oct 28 23:14:46.906212 disk-uuid[800]: Secondary Entries is updated.
Oct 28 23:14:46.906212 disk-uuid[800]: Secondary Header is updated.
Oct 28 23:14:46.899587 systemd-networkd[733]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 28 23:14:46.900477 systemd-networkd[733]: eth0: Link UP
Oct 28 23:14:46.900626 systemd-networkd[733]: eth0: Gained carrier
Oct 28 23:14:46.900636 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 28 23:14:46.902666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 23:14:46.914228 systemd-networkd[733]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 28 23:14:46.934105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 23:14:46.970308 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 28 23:14:46.972067 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 28 23:14:46.973745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 23:14:46.975972 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 28 23:14:46.979028 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 28 23:14:47.005449 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 28 23:14:47.229456 systemd-resolved[387]: Detected conflict on linux IN A 10.0.0.60
Oct 28 23:14:47.229471 systemd-resolved[387]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Oct 28 23:14:47.931897 disk-uuid[802]: Warning: The kernel is still using the old partition table.
Oct 28 23:14:47.931897 disk-uuid[802]: The new table will be used at the next reboot or after you
Oct 28 23:14:47.931897 disk-uuid[802]: run partprobe(8) or kpartx(8)
Oct 28 23:14:47.931897 disk-uuid[802]: The operation has completed successfully.
Oct 28 23:14:47.937539 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 28 23:14:47.937646 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 28 23:14:47.939984 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 28 23:14:47.968208 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832)
Oct 28 23:14:47.970672 kernel: BTRFS info (device vda6): first mount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4
Oct 28 23:14:47.970713 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 28 23:14:47.974536 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 23:14:47.974567 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 23:14:47.980188 kernel: BTRFS info (device vda6): last unmount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4
Oct 28 23:14:47.982240 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 28 23:14:47.984264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 28 23:14:48.084565 ignition[851]: Ignition 2.22.0
Oct 28 23:14:48.084578 ignition[851]: Stage: fetch-offline
Oct 28 23:14:48.084621 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Oct 28 23:14:48.084632 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 23:14:48.084787 ignition[851]: parsed url from cmdline: ""
Oct 28 23:14:48.084791 ignition[851]: no config URL provided
Oct 28 23:14:48.084796 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Oct 28 23:14:48.084804 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Oct 28 23:14:48.084838 ignition[851]: op(1): [started] loading QEMU firmware config module
Oct 28 23:14:48.084845 ignition[851]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 28 23:14:48.091487 ignition[851]: op(1): [finished] loading QEMU firmware config module
Oct 28 23:14:48.135899 ignition[851]: parsing config with SHA512: 57c57f5256a0ccfbced2be7f22c8c7f5bab5ba9ae5a20821196e2f4508415c00e49982a32b8dc900200f49288b3f9c173a90aa0922f2b1f4a728f812ac8a9465
Oct 28 23:14:48.139976 unknown[851]: fetched base config from "system"
Oct 28 23:14:48.139986 unknown[851]: fetched user config from "qemu"
Oct 28 23:14:48.140542 ignition[851]: fetch-offline: fetch-offline passed
Oct 28 23:14:48.142809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 28 23:14:48.140597 ignition[851]: Ignition finished successfully
Oct 28 23:14:48.144304 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 28 23:14:48.145087 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 28 23:14:48.190878 ignition[867]: Ignition 2.22.0
Oct 28 23:14:48.190893 ignition[867]: Stage: kargs
Oct 28 23:14:48.191038 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Oct 28 23:14:48.191047 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 23:14:48.191787 ignition[867]: kargs: kargs passed
Oct 28 23:14:48.191829 ignition[867]: Ignition finished successfully
Oct 28 23:14:48.195234 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 28 23:14:48.197610 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 28 23:14:48.241164 ignition[875]: Ignition 2.22.0
Oct 28 23:14:48.241196 ignition[875]: Stage: disks
Oct 28 23:14:48.241336 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Oct 28 23:14:48.244554 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 28 23:14:48.241345 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 23:14:48.245981 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 28 23:14:48.242278 ignition[875]: disks: disks passed
Oct 28 23:14:48.247928 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 28 23:14:48.242326 ignition[875]: Ignition finished successfully
Oct 28 23:14:48.250136 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 28 23:14:48.252161 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 28 23:14:48.253765 systemd[1]: Reached target basic.target - Basic System.
Oct 28 23:14:48.256671 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 28 23:14:48.298202 systemd-fsck[884]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 28 23:14:48.305468 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 28 23:14:48.307915 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 28 23:14:48.367991 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 28 23:14:48.369566 kernel: EXT4-fs (vda9): mounted filesystem 9b30c517-6c40-4d45-aee4-76eeb6795508 r/w with ordered data mode. Quota mode: none.
Oct 28 23:14:48.369287 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 28 23:14:48.371695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 28 23:14:48.373290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 28 23:14:48.374284 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 28 23:14:48.374315 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 28 23:14:48.374342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 28 23:14:48.392763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 28 23:14:48.396190 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 28 23:14:48.400211 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892)
Oct 28 23:14:48.400233 kernel: BTRFS info (device vda6): first mount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4
Oct 28 23:14:48.400253 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 28 23:14:48.403722 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 23:14:48.403750 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 23:14:48.404655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 28 23:14:48.449858 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory
Oct 28 23:14:48.452791 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory
Oct 28 23:14:48.455736 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory
Oct 28 23:14:48.459664 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 28 23:14:48.525088 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 28 23:14:48.527244 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 28 23:14:48.528827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 28 23:14:48.545348 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 28 23:14:48.548199 kernel: BTRFS info (device vda6): last unmount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4
Oct 28 23:14:48.566331 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 28 23:14:48.580762 ignition[1007]: INFO : Ignition 2.22.0
Oct 28 23:14:48.580762 ignition[1007]: INFO : Stage: mount
Oct 28 23:14:48.582488 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 23:14:48.582488 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 23:14:48.582488 ignition[1007]: INFO : mount: mount passed
Oct 28 23:14:48.582488 ignition[1007]: INFO : Ignition finished successfully
Oct 28 23:14:48.582888 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 28 23:14:48.585644 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 28 23:14:48.857331 systemd-networkd[733]: eth0: Gained IPv6LL
Oct 28 23:14:49.369631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 28 23:14:49.388191 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019)
Oct 28 23:14:49.390463 kernel: BTRFS info (device vda6): first mount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4
Oct 28 23:14:49.390494 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 28 23:14:49.393186 kernel: BTRFS info (device vda6): turning on async discard
Oct 28 23:14:49.393222 kernel: BTRFS info (device vda6): enabling free space tree
Oct 28 23:14:49.394534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 28 23:14:49.424721 ignition[1036]: INFO : Ignition 2.22.0
Oct 28 23:14:49.424721 ignition[1036]: INFO : Stage: files
Oct 28 23:14:49.426652 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 23:14:49.426652 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 23:14:49.426652 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Oct 28 23:14:49.426652 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 28 23:14:49.426652 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 28 23:14:49.433727 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 28 23:14:49.433727 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 28 23:14:49.433727 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 28 23:14:49.433727 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 28 23:14:49.433727 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 28 23:14:49.431127 unknown[1036]: wrote ssh authorized keys file for user: core
Oct 28 23:14:49.525996 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 28 23:14:49.766785 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 28 23:14:49.766785 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 28 23:14:49.771790 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Oct 28 23:14:50.231021 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 28 23:14:51.055966 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 28 23:14:51.055966 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 28 23:14:51.060468 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 28 23:14:51.062997 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 28 23:14:51.076959 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 23:14:51.082120 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 28 23:14:51.082120 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 28 23:14:51.082120 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 28 23:14:51.082120 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 28 23:14:51.090990 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 23:14:51.090990 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 28 23:14:51.090990 ignition[1036]: INFO : files: files passed
Oct 28 23:14:51.090990 ignition[1036]: INFO : Ignition finished successfully
Oct 28 23:14:51.085651 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 28 23:14:51.088312 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 28 23:14:51.092309 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 28 23:14:51.112845 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 28 23:14:51.113475 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 28 23:14:51.115953 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 28 23:14:51.117654 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 23:14:51.117654 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 23:14:51.120852 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 28 23:14:51.120329 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 23:14:51.122215 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 28 23:14:51.125219 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 28 23:14:51.173878 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 28 23:14:51.174968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 28 23:14:51.177647 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 28 23:14:51.179617 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 28 23:14:51.180912 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 28 23:14:51.181649 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 28 23:14:51.215954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 23:14:51.219353 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 28 23:14:51.243717 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 28 23:14:51.243911 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 28 23:14:51.246158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 23:14:51.248458 systemd[1]: Stopped target timers.target - Timer Units.
Oct 28 23:14:51.250293 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 28 23:14:51.250414 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 28 23:14:51.253229 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 28 23:14:51.255300 systemd[1]: Stopped target basic.target - Basic System.
Oct 28 23:14:51.257205 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 28 23:14:51.259106 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 28 23:14:51.261272 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 28 23:14:51.263374 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 28 23:14:51.265449 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 28 23:14:51.267425 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 28 23:14:51.269489 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 28 23:14:51.271484 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 28 23:14:51.273236 systemd[1]: Stopped target swap.target - Swaps.
Oct 28 23:14:51.274868 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 28 23:14:51.274997 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 28 23:14:51.277406 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 28 23:14:51.279373 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 23:14:51.281383 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 28 23:14:51.283278 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 23:14:51.284553 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 28 23:14:51.284683 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 28 23:14:51.287593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 28 23:14:51.287763 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 28 23:14:51.289917 systemd[1]: Stopped target paths.target - Path Units.
Oct 28 23:14:51.291882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 28 23:14:51.292912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 23:14:51.294348 systemd[1]: Stopped target slices.target - Slice Units.
Oct 28 23:14:51.295981 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 28 23:14:51.297826 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 28 23:14:51.297916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 23:14:51.300187 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 28 23:14:51.300271 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 23:14:51.302132 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 28 23:14:51.302272 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 28 23:14:51.304227 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 28 23:14:51.304337 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 28 23:14:51.306870 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 28 23:14:51.308563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 28 23:14:51.308713 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 23:14:51.331510 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 28 23:14:51.332408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 28 23:14:51.332533 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 23:14:51.334630 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 28 23:14:51.334750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 23:14:51.336765 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 28 23:14:51.336871 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 28 23:14:51.344687 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 28 23:14:51.345645 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 28 23:14:51.347241 ignition[1094]: INFO : Ignition 2.22.0
Oct 28 23:14:51.347241 ignition[1094]: INFO : Stage: umount
Oct 28 23:14:51.349691 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 28 23:14:51.349691 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 28 23:14:51.349691 ignition[1094]: INFO : umount: umount passed
Oct 28 23:14:51.349691 ignition[1094]: INFO : Ignition finished successfully
Oct 28 23:14:51.349034 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 28 23:14:51.351875 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 28 23:14:51.351968 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 28 23:14:51.353939 systemd[1]: Stopped target network.target - Network.
Oct 28 23:14:51.355786 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 28 23:14:51.355843 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 28 23:14:51.357882 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 28 23:14:51.357933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 28 23:14:51.359942 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 28 23:14:51.359992 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 28 23:14:51.362036 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 28 23:14:51.362086 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 28 23:14:51.364000 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 28 23:14:51.365867 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 28 23:14:51.377357 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 28 23:14:51.377489 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 28 23:14:51.381442 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 28 23:14:51.381557 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 28 23:14:51.386253 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 28 23:14:51.387436 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 28 23:14:51.387475 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 23:14:51.390303 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 28 23:14:51.391730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 28 23:14:51.391797 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 28 23:14:51.395251 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 28 23:14:51.395298 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 28 23:14:51.397254 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 28 23:14:51.397296 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 28 23:14:51.399348 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 23:14:51.402803 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 28 23:14:51.406629 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 28 23:14:51.408026 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 28 23:14:51.408107 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 28 23:14:51.414451 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 28 23:14:51.414600 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 23:14:51.417804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 28 23:14:51.417870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 28 23:14:51.419224 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 28 23:14:51.419256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 23:14:51.421191 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 28 23:14:51.421241 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 28 23:14:51.424137 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 28 23:14:51.424205 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 28 23:14:51.427073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 28 23:14:51.427121 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 23:14:51.431739 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 28 23:14:51.433294 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 28 23:14:51.433356 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 23:14:51.435705 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 28 23:14:51.435749 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 23:14:51.438076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 28 23:14:51.438124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 23:14:51.440935 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 28 23:14:51.441033 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 28 23:14:51.442728 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 28 23:14:51.442791 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 28 23:14:51.445139 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 28 23:14:51.447407 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 28 23:14:51.471673 systemd[1]: Switching root.
Oct 28 23:14:51.504409 systemd-journald[344]: Journal stopped
Oct 28 23:14:52.264977 systemd-journald[344]: Received SIGTERM from PID 1 (systemd).
Oct 28 23:14:52.265026 kernel: SELinux: policy capability network_peer_controls=1
Oct 28 23:14:52.265042 kernel: SELinux: policy capability open_perms=1
Oct 28 23:14:52.265053 kernel: SELinux: policy capability extended_socket_class=1
Oct 28 23:14:52.265062 kernel: SELinux: policy capability always_check_network=0
Oct 28 23:14:52.265075 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 28 23:14:52.265084 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 28 23:14:52.265094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 28 23:14:52.265107 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 28 23:14:52.265116 kernel: SELinux: policy capability userspace_initial_context=0
Oct 28 23:14:52.265126 kernel: audit: type=1403 audit(1761693291.687:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 28 23:14:52.265140 systemd[1]: Successfully loaded SELinux policy in 54.232ms.
Oct 28 23:14:52.265158 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.364ms.
Oct 28 23:14:52.265187 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 23:14:52.265200 systemd[1]: Detected virtualization kvm.
Oct 28 23:14:52.265211 systemd[1]: Detected architecture arm64.
Oct 28 23:14:52.265221 systemd[1]: Detected first boot.
Oct 28 23:14:52.265231 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 28 23:14:52.265242 zram_generator::config[1142]: No configuration found.
Oct 28 23:14:52.265255 kernel: NET: Registered PF_VSOCK protocol family
Oct 28 23:14:52.265267 systemd[1]: Populated /etc with preset unit settings.
Oct 28 23:14:52.265278 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 28 23:14:52.265294 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 28 23:14:52.265305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 28 23:14:52.265317 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 28 23:14:52.265329 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 28 23:14:52.265341 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 28 23:14:52.265351 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 28 23:14:52.265363 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 28 23:14:52.265374 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 28 23:14:52.265384 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 28 23:14:52.265394 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 28 23:14:52.265406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 23:14:52.265417 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 23:14:52.265428 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 28 23:14:52.265439 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 28 23:14:52.265449 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 28 23:14:52.265459 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 23:14:52.265470 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 28 23:14:52.265487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 23:14:52.265498 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 23:14:52.265508 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 28 23:14:52.265520 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 28 23:14:52.265538 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 28 23:14:52.265549 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 28 23:14:52.265561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 28 23:14:52.265572 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 28 23:14:52.265583 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 23:14:52.265594 systemd[1]: Reached target swap.target - Swaps.
Oct 28 23:14:52.265605 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 28 23:14:52.265618 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 28 23:14:52.265629 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 28 23:14:52.265640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 23:14:52.265657 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 23:14:52.265669 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 23:14:52.265679 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 28 23:14:52.265690 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 28 23:14:52.265701 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 28 23:14:52.265711 systemd[1]: Mounting media.mount - External Media Directory...
Oct 28 23:14:52.265722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 28 23:14:52.265738 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 28 23:14:52.265749 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 28 23:14:52.265760 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 28 23:14:52.265771 systemd[1]: Reached target machines.target - Containers.
Oct 28 23:14:52.265782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 28 23:14:52.265793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 28 23:14:52.265803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 23:14:52.265815 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 28 23:14:52.265825 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 28 23:14:52.265836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 28 23:14:52.265846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 28 23:14:52.265857 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 28 23:14:52.265868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 28 23:14:52.265880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 28 23:14:52.265891 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 28 23:14:52.265901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 28 23:14:52.265912 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 28 23:14:52.265922 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 28 23:14:52.265934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 28 23:14:52.265944 kernel: fuse: init (API version 7.41)
Oct 28 23:14:52.265956 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 23:14:52.265967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 23:14:52.265977 kernel: ACPI: bus type drm_connector registered
Oct 28 23:14:52.265987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 23:14:52.265998 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 28 23:14:52.266010 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 28 23:14:52.266020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 28 23:14:52.266050 systemd-journald[1216]: Collecting audit messages is disabled.
Oct 28 23:14:52.266072 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 28 23:14:52.266083 systemd-journald[1216]: Journal started
Oct 28 23:14:52.266104 systemd-journald[1216]: Runtime Journal (/run/log/journal/f1c0e61bba554bc19b2a20413066e088) is 6M, max 48.5M, 42.4M free.
Oct 28 23:14:52.041612 systemd[1]: Queued start job for default target multi-user.target.
Oct 28 23:14:52.062001 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 28 23:14:52.062421 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 28 23:14:52.267208 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 28 23:14:52.270093 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 23:14:52.271240 systemd[1]: Mounted media.mount - External Media Directory.
Oct 28 23:14:52.272311 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 28 23:14:52.273493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 28 23:14:52.274735 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 28 23:14:52.275962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 28 23:14:52.277461 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 23:14:52.278912 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 28 23:14:52.279061 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 28 23:14:52.280514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 28 23:14:52.280694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 28 23:14:52.282115 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 28 23:14:52.282346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 28 23:14:52.283721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 28 23:14:52.283879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 28 23:14:52.285411 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 28 23:14:52.285569 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 28 23:14:52.286917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 28 23:14:52.287070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 28 23:14:52.288795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 28 23:14:52.290352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 23:14:52.293219 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 28 23:14:52.294876 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 28 23:14:52.306898 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 28 23:14:52.308437 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 28 23:14:52.310721 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 28 23:14:52.312689 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 28 23:14:52.313873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 28 23:14:52.313899 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 28 23:14:52.315805 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 28 23:14:52.317460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 28 23:14:52.321266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 28 23:14:52.323235 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 28 23:14:52.324504 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 28 23:14:52.326790 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 28 23:14:52.328090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 28 23:14:52.329131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 28 23:14:52.336830 systemd-journald[1216]: Time spent on flushing to /var/log/journal/f1c0e61bba554bc19b2a20413066e088 is 15.156ms for 882 entries.
Oct 28 23:14:52.336830 systemd-journald[1216]: System Journal (/var/log/journal/f1c0e61bba554bc19b2a20413066e088) is 8M, max 163.5M, 155.5M free.
Oct 28 23:14:52.358884 systemd-journald[1216]: Received client request to flush runtime journal.
Oct 28 23:14:52.358916 kernel: loop1: detected capacity change from 0 to 200800
Oct 28 23:14:52.333300 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 28 23:14:52.335456 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 28 23:14:52.340673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 28 23:14:52.342276 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 28 23:14:52.343742 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 28 23:14:52.345314 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 28 23:14:52.348466 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 28 23:14:52.351440 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 28 23:14:52.363368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 28 23:14:52.365945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 28 23:14:52.374322 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 28 23:14:52.380787 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 28 23:14:52.383185 kernel: loop2: detected capacity change from 0 to 119400
Oct 28 23:14:52.383775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 28 23:14:52.385809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 28 23:14:52.400372 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 28 23:14:52.410934 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Oct 28 23:14:52.410947 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
Oct 28 23:14:52.416194 kernel: loop3: detected capacity change from 0 to 100192
Oct 28 23:14:52.420148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 23:14:52.442229 kernel: loop4: detected capacity change from 0 to 200800
Oct 28 23:14:52.444027 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 28 23:14:52.452184 kernel: loop5: detected capacity change from 0 to 119400
Oct 28 23:14:52.461209 kernel: loop6: detected capacity change from 0 to 100192
Oct 28 23:14:52.466040 (sd-merge)[1283]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 28 23:14:52.468869 (sd-merge)[1283]: Merged extensions into '/usr'.
Oct 28 23:14:52.472105 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 28 23:14:52.472126 systemd[1]: Reloading...
Oct 28 23:14:52.499092 systemd-resolved[1276]: Positive Trust Anchors:
Oct 28 23:14:52.499109 systemd-resolved[1276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 28 23:14:52.499113 systemd-resolved[1276]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 28 23:14:52.499144 systemd-resolved[1276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 28 23:14:52.509351 systemd-resolved[1276]: Defaulting to hostname 'linux'.
Oct 28 23:14:52.521200 zram_generator::config[1315]: No configuration found.
Oct 28 23:14:52.655448 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 28 23:14:52.655560 systemd[1]: Reloading finished in 183 ms.
Oct 28 23:14:52.683708 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 28 23:14:52.686196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 28 23:14:52.689133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 28 23:14:52.699307 systemd[1]: Starting ensure-sysext.service...
Oct 28 23:14:52.701085 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 28 23:14:52.713756 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)...
Oct 28 23:14:52.713776 systemd[1]: Reloading...
Oct 28 23:14:52.714574 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 28 23:14:52.714605 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 28 23:14:52.714838 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 28 23:14:52.715006 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 28 23:14:52.715625 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 28 23:14:52.715832 systemd-tmpfiles[1351]: ACLs are not supported, ignoring.
Oct 28 23:14:52.715879 systemd-tmpfiles[1351]: ACLs are not supported, ignoring.
Oct 28 23:14:52.719679 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot.
Oct 28 23:14:52.719690 systemd-tmpfiles[1351]: Skipping /boot
Oct 28 23:14:52.725515 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot.
Oct 28 23:14:52.725530 systemd-tmpfiles[1351]: Skipping /boot
Oct 28 23:14:52.759218 zram_generator::config[1384]: No configuration found.
Oct 28 23:14:52.880232 systemd[1]: Reloading finished in 166 ms.
Oct 28 23:14:52.900635 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 28 23:14:52.915035 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 23:14:52.922368 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 28 23:14:52.924608 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 28 23:14:52.926835 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 28 23:14:52.930882 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 28 23:14:52.933664 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 28 23:14:52.936030 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 28 23:14:52.940406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 28 23:14:52.942271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 28 23:14:52.946044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 28 23:14:52.953422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 28 23:14:52.954801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 28 23:14:52.954914 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 28 23:14:52.960520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 28 23:14:52.965287 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 28 23:14:52.966730 systemd-udevd[1422]: Using default interface naming scheme 'v257'.
Oct 28 23:14:52.967087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 28 23:14:52.967307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 28 23:14:52.971695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 28 23:14:52.972112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 28 23:14:52.974206 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 28 23:14:52.974366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 28 23:14:52.982891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 28 23:14:52.987383 augenrules[1451]: No rules
Oct 28 23:14:52.987414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 28 23:14:52.991149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 28 23:14:52.993581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 28 23:14:52.998838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 28 23:14:53.002973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 28 23:14:53.003091 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 28 23:14:53.004223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 28 23:14:53.006785 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 28 23:14:53.007786 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 28 23:14:53.009611 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 28 23:14:53.010148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 28 23:14:53.012135 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 28 23:14:53.012388 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 28 23:14:53.014364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 28 23:14:53.014521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 28 23:14:53.021468 systemd[1]: Finished ensure-sysext.service.
Oct 28 23:14:53.023984 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 28 23:14:53.030095 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 28 23:14:53.030503 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 28 23:14:53.040099 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 28 23:14:53.041356 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 28 23:14:53.041422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 28 23:14:53.048308 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 28 23:14:53.049708 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 28 23:14:53.093993 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 28 23:14:53.103873 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 28 23:14:53.105586 systemd[1]: Reached target time-set.target - System Time Set.
Oct 28 23:14:53.119055 systemd-networkd[1485]: lo: Link UP
Oct 28 23:14:53.119069 systemd-networkd[1485]: lo: Gained carrier
Oct 28 23:14:53.120245 systemd-networkd[1485]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 28 23:14:53.120253 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 28 23:14:53.120769 systemd-networkd[1485]: eth0: Link UP
Oct 28 23:14:53.120787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 28 23:14:53.120892 systemd-networkd[1485]: eth0: Gained carrier
Oct 28 23:14:53.120906 systemd-networkd[1485]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 28 23:14:53.122400 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 28 23:14:53.125477 systemd[1]: Reached target network.target - Network.
Oct 28 23:14:53.127721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 28 23:14:53.130395 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 28 23:14:53.131589 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 28 23:14:53.132194 systemd-timesyncd[1486]: Network configuration changed, trying to establish connection.
Oct 28 23:14:53.132636 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 28 23:14:52.705464 systemd-timesyncd[1486]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 28 23:14:52.711846 systemd-journald[1216]: Time jumped backwards, rotating.
Oct 28 23:14:52.707465 systemd-resolved[1276]: Clock change detected. Flushing caches.
Oct 28 23:14:52.708391 systemd-timesyncd[1486]: Initial clock synchronization to Tue 2025-10-28 23:14:52.705363 UTC.
Oct 28 23:14:52.722872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 28 23:14:52.727744 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 28 23:14:52.792714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 23:14:52.813824 ldconfig[1419]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 28 23:14:52.818787 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 28 23:14:52.823666 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 28 23:14:52.841278 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 28 23:14:52.848669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 23:14:52.851189 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 28 23:14:52.852382 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 28 23:14:52.853696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 28 23:14:52.855075 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 28 23:14:52.856249 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 28 23:14:52.857566 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 28 23:14:52.858886 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 28 23:14:52.858925 systemd[1]: Reached target paths.target - Path Units.
Oct 28 23:14:52.859847 systemd[1]: Reached target timers.target - Timer Units.
Oct 28 23:14:52.861509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 28 23:14:52.863713 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 28 23:14:52.866394 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 28 23:14:52.867783 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 28 23:14:52.869057 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 28 23:14:52.872128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 23:14:52.873480 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 23:14:52.875110 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 23:14:52.876288 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 23:14:52.877329 systemd[1]: Reached target basic.target - Basic System. Oct 28 23:14:52.878341 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 23:14:52.878372 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 23:14:52.879199 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 23:14:52.881131 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 23:14:52.883574 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 23:14:52.885607 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 23:14:52.887543 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 23:14:52.888535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 23:14:52.889457 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 23:14:52.892179 jq[1535]: false Oct 28 23:14:52.893521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 23:14:52.895362 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 23:14:52.897656 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 28 23:14:52.900357 extend-filesystems[1536]: Found /dev/vda6 Oct 28 23:14:52.901051 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 28 23:14:52.903579 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 23:14:52.903971 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 23:14:52.905536 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 23:14:52.906616 extend-filesystems[1536]: Found /dev/vda9 Oct 28 23:14:52.908395 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 23:14:52.908834 extend-filesystems[1536]: Checking size of /dev/vda9 Oct 28 23:14:52.913443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 23:14:52.916479 jq[1555]: true Oct 28 23:14:52.916565 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 23:14:52.916740 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 23:14:52.917308 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 23:14:52.917523 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 23:14:52.919942 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 23:14:52.920553 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 28 23:14:52.934920 extend-filesystems[1536]: Resized partition /dev/vda9 Oct 28 23:14:52.936525 tar[1563]: linux-arm64/LICENSE Oct 28 23:14:52.936525 tar[1563]: linux-arm64/helm Oct 28 23:14:52.940250 update_engine[1552]: I20251028 23:14:52.939690 1552 main.cc:92] Flatcar Update Engine starting Oct 28 23:14:52.940634 extend-filesystems[1580]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 23:14:52.944726 jq[1566]: true Oct 28 23:14:52.949507 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 23:14:52.972382 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (Power Button) Oct 28 23:14:52.977044 systemd-logind[1547]: New seat seat0. Oct 28 23:14:52.977857 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 23:14:52.985391 dbus-daemon[1533]: [system] SELinux support is enabled Oct 28 23:14:52.985580 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 23:14:52.988861 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 23:14:52.988904 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 23:14:52.990687 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 23:14:52.990712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 23:14:52.992720 dbus-daemon[1533]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 28 23:14:52.994821 systemd[1]: Started update-engine.service - Update Engine. 
Oct 28 23:14:52.997111 update_engine[1552]: I20251028 23:14:52.995208 1552 update_check_scheduler.cc:74] Next update check in 8m32s Oct 28 23:14:53.001085 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 23:14:53.008444 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 23:14:53.028318 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 23:14:53.028318 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 23:14:53.028318 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 28 23:14:53.037012 extend-filesystems[1536]: Resized filesystem in /dev/vda9 Oct 28 23:14:53.040075 bash[1599]: Updated "/home/core/.ssh/authorized_keys" Oct 28 23:14:53.028713 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 23:14:53.031471 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 23:14:53.037129 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 23:14:53.042358 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 28 23:14:53.073985 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 23:14:53.115467 containerd[1571]: time="2025-10-28T23:14:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 23:14:53.118492 containerd[1571]: time="2025-10-28T23:14:53.118459454Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 28 23:14:53.133444 containerd[1571]: time="2025-10-28T23:14:53.133388174Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.48µs" Oct 28 23:14:53.133444 containerd[1571]: time="2025-10-28T23:14:53.133447734Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 23:14:53.133538 containerd[1571]: time="2025-10-28T23:14:53.133472134Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 23:14:53.133642 containerd[1571]: time="2025-10-28T23:14:53.133616494Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 23:14:53.133666 containerd[1571]: time="2025-10-28T23:14:53.133641414Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 23:14:53.133683 containerd[1571]: time="2025-10-28T23:14:53.133668694Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 23:14:53.133740 containerd[1571]: time="2025-10-28T23:14:53.133719334Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 23:14:53.133784 containerd[1571]: time="2025-10-28T23:14:53.133738294Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134221 containerd[1571]: time="2025-10-28T23:14:53.134192294Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134221 containerd[1571]: time="2025-10-28T23:14:53.134216734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134263 containerd[1571]: time="2025-10-28T23:14:53.134229974Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134263 containerd[1571]: time="2025-10-28T23:14:53.134237814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134332 containerd[1571]: time="2025-10-28T23:14:53.134312974Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134537 containerd[1571]: time="2025-10-28T23:14:53.134515374Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134573 containerd[1571]: time="2025-10-28T23:14:53.134547134Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 23:14:53.134573 containerd[1571]: time="2025-10-28T23:14:53.134557374Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 23:14:53.134605 containerd[1571]: time="2025-10-28T23:14:53.134586774Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 23:14:53.134786 containerd[1571]: time="2025-10-28T23:14:53.134768814Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 23:14:53.134848 containerd[1571]: time="2025-10-28T23:14:53.134832214Z" level=info msg="metadata content store policy set" policy=shared Oct 28 23:14:53.141635 containerd[1571]: time="2025-10-28T23:14:53.141595734Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 23:14:53.141686 containerd[1571]: time="2025-10-28T23:14:53.141657214Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 23:14:53.141686 containerd[1571]: time="2025-10-28T23:14:53.141671374Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 23:14:53.141686 containerd[1571]: time="2025-10-28T23:14:53.141683174Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 23:14:53.141752 containerd[1571]: time="2025-10-28T23:14:53.141694014Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 23:14:53.141752 containerd[1571]: time="2025-10-28T23:14:53.141704294Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 23:14:53.141752 containerd[1571]: time="2025-10-28T23:14:53.141718174Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 23:14:53.141752 containerd[1571]: time="2025-10-28T23:14:53.141729654Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 23:14:53.141752 containerd[1571]: time="2025-10-28T23:14:53.141738774Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 23:14:53.141752 containerd[1571]: time="2025-10-28T23:14:53.141749294Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 23:14:53.141846 containerd[1571]: time="2025-10-28T23:14:53.141758414Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 23:14:53.141846 containerd[1571]: time="2025-10-28T23:14:53.141776494Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 23:14:53.141917 containerd[1571]: time="2025-10-28T23:14:53.141886934Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 23:14:53.141942 containerd[1571]: time="2025-10-28T23:14:53.141922614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 23:14:53.141942 containerd[1571]: time="2025-10-28T23:14:53.141939614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 23:14:53.141979 containerd[1571]: time="2025-10-28T23:14:53.141953414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 23:14:53.141979 containerd[1571]: time="2025-10-28T23:14:53.141964214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 23:14:53.141979 containerd[1571]: time="2025-10-28T23:14:53.141974534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 23:14:53.142047 containerd[1571]: time="2025-10-28T23:14:53.141984894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 23:14:53.142047 containerd[1571]: time="2025-10-28T23:14:53.141994374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 
23:14:53.142047 containerd[1571]: time="2025-10-28T23:14:53.142005054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 23:14:53.142047 containerd[1571]: time="2025-10-28T23:14:53.142015454Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 23:14:53.142047 containerd[1571]: time="2025-10-28T23:14:53.142033574Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 23:14:53.142288 containerd[1571]: time="2025-10-28T23:14:53.142269654Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 23:14:53.142314 containerd[1571]: time="2025-10-28T23:14:53.142289974Z" level=info msg="Start snapshots syncer" Oct 28 23:14:53.142333 containerd[1571]: time="2025-10-28T23:14:53.142314054Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 23:14:53.144748 containerd[1571]: time="2025-10-28T23:14:53.142526974Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 23:14:53.144748 containerd[1571]: time="2025-10-28T23:14:53.142575294Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142629694Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142723854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142744614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142755654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142765254Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142777214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142786894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142796694Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142818254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142828454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142838854Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142866734Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142880134Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 23:14:53.144864 containerd[1571]: time="2025-10-28T23:14:53.142888294Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.142906734Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.142916854Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.142926094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.142935974Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.143010414Z" level=info msg="runtime interface created" Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.143015214Z" level=info msg="created NRI interface" Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.143023094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.143033014Z" level=info msg="Connect containerd service" Oct 28 23:14:53.145099 containerd[1571]: time="2025-10-28T23:14:53.143058334Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 23:14:53.145099 
containerd[1571]: time="2025-10-28T23:14:53.143691134Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 23:14:53.210727 containerd[1571]: time="2025-10-28T23:14:53.210660654Z" level=info msg="Start subscribing containerd event" Oct 28 23:14:53.210727 containerd[1571]: time="2025-10-28T23:14:53.210737654Z" level=info msg="Start recovering state" Oct 28 23:14:53.210847 containerd[1571]: time="2025-10-28T23:14:53.210830014Z" level=info msg="Start event monitor" Oct 28 23:14:53.210938 containerd[1571]: time="2025-10-28T23:14:53.210848934Z" level=info msg="Start cni network conf syncer for default" Oct 28 23:14:53.210938 containerd[1571]: time="2025-10-28T23:14:53.210862214Z" level=info msg="Start streaming server" Oct 28 23:14:53.211041 containerd[1571]: time="2025-10-28T23:14:53.211021734Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 23:14:53.211041 containerd[1571]: time="2025-10-28T23:14:53.211037014Z" level=info msg="runtime interface starting up..." Oct 28 23:14:53.211085 containerd[1571]: time="2025-10-28T23:14:53.211044814Z" level=info msg="starting plugins..." Oct 28 23:14:53.211085 containerd[1571]: time="2025-10-28T23:14:53.211062214Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 23:14:53.211304 containerd[1571]: time="2025-10-28T23:14:53.211080694Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 23:14:53.211372 containerd[1571]: time="2025-10-28T23:14:53.211336454Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 23:14:53.211454 containerd[1571]: time="2025-10-28T23:14:53.211437894Z" level=info msg="containerd successfully booted in 0.097475s" Oct 28 23:14:53.211545 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 28 23:14:53.294880 tar[1563]: linux-arm64/README.md Oct 28 23:14:53.318493 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 28 23:14:53.631401 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 23:14:53.650545 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 23:14:53.653208 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 23:14:53.676468 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 23:14:53.676666 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 23:14:53.680149 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 23:14:53.700691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 23:14:53.704130 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 23:14:53.706990 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 28 23:14:53.708993 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 23:14:54.316589 systemd-networkd[1485]: eth0: Gained IPv6LL Oct 28 23:14:54.321942 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 23:14:54.323846 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 23:14:54.326243 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 23:14:54.328582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:14:54.337028 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 23:14:54.354631 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 23:14:54.356774 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 23:14:54.356973 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Oct 28 23:14:54.359058 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 23:14:54.845496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:14:54.847284 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 23:14:54.848601 systemd[1]: Startup finished in 1.191s (kernel) + 5.582s (initrd) + 3.645s (userspace) = 10.419s. Oct 28 23:14:54.849855 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 23:14:55.155231 kubelet[1670]: E1028 23:14:55.155116 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 23:14:55.157388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 23:14:55.157536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 23:14:55.158542 systemd[1]: kubelet.service: Consumed 679ms CPU time, 248.3M memory peak. Oct 28 23:14:56.850947 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 23:14:56.851980 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:41298.service - OpenSSH per-connection server daemon (10.0.0.1:41298). Oct 28 23:14:56.934843 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 41298 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:14:56.936351 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:14:56.945114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 23:14:56.946012 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Oct 28 23:14:56.948358 systemd-logind[1547]: New session 1 of user core. Oct 28 23:14:56.975462 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 23:14:56.977287 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 23:14:56.995855 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:14:56.998047 systemd-logind[1547]: New session 2 of user core. Oct 28 23:14:57.087882 systemd[1689]: Queued start job for default target default.target. Oct 28 23:14:57.108258 systemd[1689]: Created slice app.slice - User Application Slice. Oct 28 23:14:57.108287 systemd[1689]: Reached target paths.target - Paths. Oct 28 23:14:57.108322 systemd[1689]: Reached target timers.target - Timers. Oct 28 23:14:57.109410 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 23:14:57.118111 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 23:14:57.118172 systemd[1689]: Reached target sockets.target - Sockets. Oct 28 23:14:57.118207 systemd[1689]: Reached target basic.target - Basic System. Oct 28 23:14:57.118241 systemd[1689]: Reached target default.target - Main User Target. Oct 28 23:14:57.118265 systemd[1689]: Startup finished in 115ms. Oct 28 23:14:57.118374 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 23:14:57.119588 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 23:14:57.129362 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:41312.service - OpenSSH per-connection server daemon (10.0.0.1:41312). Oct 28 23:14:57.172067 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 41312 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:14:57.173222 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:14:57.177255 systemd-logind[1547]: New session 3 of user core. 
Oct 28 23:14:57.198580 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 28 23:14:57.208733 sshd[1705]: Connection closed by 10.0.0.1 port 41312 Oct 28 23:14:57.208211 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Oct 28 23:14:57.211900 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:41312.service: Deactivated successfully. Oct 28 23:14:57.214621 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 23:14:57.215724 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Oct 28 23:14:57.217378 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:41314.service - OpenSSH per-connection server daemon (10.0.0.1:41314). Oct 28 23:14:57.218043 systemd-logind[1547]: Removed session 3. Oct 28 23:14:57.280796 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:14:57.281888 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:14:57.285343 systemd-logind[1547]: New session 4 of user core. Oct 28 23:14:57.300563 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 23:14:57.308077 sshd[1715]: Connection closed by 10.0.0.1 port 41314 Oct 28 23:14:57.307962 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Oct 28 23:14:57.327286 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:41314.service: Deactivated successfully. Oct 28 23:14:57.329746 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 23:14:57.330457 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. Oct 28 23:14:57.332702 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:41326.service - OpenSSH per-connection server daemon (10.0.0.1:41326). Oct 28 23:14:57.333343 systemd-logind[1547]: Removed session 4. 
Oct 28 23:14:57.383921 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 41326 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:14:57.385214 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:14:57.388780 systemd-logind[1547]: New session 5 of user core. Oct 28 23:14:57.396560 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 28 23:14:57.406484 sshd[1727]: Connection closed by 10.0.0.1 port 41326 Oct 28 23:14:57.406803 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Oct 28 23:14:57.417258 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:41326.service: Deactivated successfully. Oct 28 23:14:57.418683 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 23:14:57.420150 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Oct 28 23:14:57.421328 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:41342.service - OpenSSH per-connection server daemon (10.0.0.1:41342). Oct 28 23:14:57.422243 systemd-logind[1547]: Removed session 5. Oct 28 23:14:57.464281 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 41342 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:14:57.465331 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:14:57.469563 systemd-logind[1547]: New session 6 of user core. Oct 28 23:14:57.484568 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 28 23:14:57.500517 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 28 23:14:57.500750 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 23:14:57.520269 sudo[1738]: pam_unix(sudo:session): session closed for user root
Oct 28 23:14:57.521607 sshd[1737]: Connection closed by 10.0.0.1 port 41342
Oct 28 23:14:57.522022 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Oct 28 23:14:57.534422 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:41342.service: Deactivated successfully.
Oct 28 23:14:57.535885 systemd[1]: session-6.scope: Deactivated successfully.
Oct 28 23:14:57.537624 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit.
Oct 28 23:14:57.539967 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:41350.service - OpenSSH per-connection server daemon (10.0.0.1:41350).
Oct 28 23:14:57.540592 systemd-logind[1547]: Removed session 6.
Oct 28 23:14:57.592142 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg
Oct 28 23:14:57.593366 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:14:57.597770 systemd-logind[1547]: New session 7 of user core.
Oct 28 23:14:57.610580 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 28 23:14:57.622064 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 28 23:14:57.622574 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 23:14:57.626128 sudo[1751]: pam_unix(sudo:session): session closed for user root
Oct 28 23:14:57.631762 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 28 23:14:57.631992 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 23:14:57.639352 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 28 23:14:57.674064 augenrules[1775]: No rules
Oct 28 23:14:57.675161 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 28 23:14:57.675360 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 28 23:14:57.676543 sudo[1750]: pam_unix(sudo:session): session closed for user root
Oct 28 23:14:57.677533 sshd[1749]: Connection closed by 10.0.0.1 port 41350
Oct 28 23:14:57.678576 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Oct 28 23:14:57.688310 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:41350.service: Deactivated successfully.
Oct 28 23:14:57.690834 systemd[1]: session-7.scope: Deactivated successfully.
Oct 28 23:14:57.693408 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit.
Oct 28 23:14:57.695170 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:41352.service - OpenSSH per-connection server daemon (10.0.0.1:41352).
Oct 28 23:14:57.695764 systemd-logind[1547]: Removed session 7.
Oct 28 23:14:57.748328 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 41352 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg
Oct 28 23:14:57.749606 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:14:57.753987 systemd-logind[1547]: New session 8 of user core.
Oct 28 23:14:57.768587 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 28 23:14:57.779674 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 28 23:14:57.779903 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 28 23:14:58.043913 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 28 23:14:58.059697 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 28 23:14:58.248931 dockerd[1810]: time="2025-10-28T23:14:58.248859814Z" level=info msg="Starting up"
Oct 28 23:14:58.250163 dockerd[1810]: time="2025-10-28T23:14:58.250125294Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 28 23:14:58.259889 dockerd[1810]: time="2025-10-28T23:14:58.259840934Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 28 23:14:58.449507 dockerd[1810]: time="2025-10-28T23:14:58.449344734Z" level=info msg="Loading containers: start."
Oct 28 23:14:58.457467 kernel: Initializing XFRM netlink socket
Oct 28 23:14:58.634752 systemd-networkd[1485]: docker0: Link UP
Oct 28 23:14:58.637969 dockerd[1810]: time="2025-10-28T23:14:58.637871454Z" level=info msg="Loading containers: done."
Oct 28 23:14:58.648818 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4007326395-merged.mount: Deactivated successfully.
Oct 28 23:14:58.651259 dockerd[1810]: time="2025-10-28T23:14:58.651212294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 28 23:14:58.651329 dockerd[1810]: time="2025-10-28T23:14:58.651285614Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 28 23:14:58.651460 dockerd[1810]: time="2025-10-28T23:14:58.651441694Z" level=info msg="Initializing buildkit"
Oct 28 23:14:58.671272 dockerd[1810]: time="2025-10-28T23:14:58.671231374Z" level=info msg="Completed buildkit initialization"
Oct 28 23:14:58.675752 dockerd[1810]: time="2025-10-28T23:14:58.675728094Z" level=info msg="Daemon has completed initialization"
Oct 28 23:14:58.676020 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 28 23:14:58.676416 dockerd[1810]: time="2025-10-28T23:14:58.675816534Z" level=info msg="API listen on /run/docker.sock"
Oct 28 23:14:59.107590 containerd[1571]: time="2025-10-28T23:14:59.107554094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Oct 28 23:14:59.932824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860964823.mount: Deactivated successfully.
Oct 28 23:15:00.927891 containerd[1571]: time="2025-10-28T23:15:00.927833294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:00.928787 containerd[1571]: time="2025-10-28T23:15:00.928757334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512"
Oct 28 23:15:00.929809 containerd[1571]: time="2025-10-28T23:15:00.929389694Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:00.932385 containerd[1571]: time="2025-10-28T23:15:00.932346134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:00.933366 containerd[1571]: time="2025-10-28T23:15:00.933334094Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.8257424s"
Oct 28 23:15:00.933475 containerd[1571]: time="2025-10-28T23:15:00.933459574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\""
Oct 28 23:15:00.934171 containerd[1571]: time="2025-10-28T23:15:00.934146454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Oct 28 23:15:01.955517 containerd[1571]: time="2025-10-28T23:15:01.955473294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:01.956725 containerd[1571]: time="2025-10-28T23:15:01.956698694Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145"
Oct 28 23:15:01.957690 containerd[1571]: time="2025-10-28T23:15:01.957647334Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:01.960303 containerd[1571]: time="2025-10-28T23:15:01.960274014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:01.961280 containerd[1571]: time="2025-10-28T23:15:01.961135214Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.02692104s"
Oct 28 23:15:01.961280 containerd[1571]: time="2025-10-28T23:15:01.961165374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\""
Oct 28 23:15:01.961531 containerd[1571]: time="2025-10-28T23:15:01.961501854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Oct 28 23:15:02.799282 containerd[1571]: time="2025-10-28T23:15:02.799230454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:02.799876 containerd[1571]: time="2025-10-28T23:15:02.799846414Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886"
Oct 28 23:15:02.800709 containerd[1571]: time="2025-10-28T23:15:02.800683374Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:02.803039 containerd[1571]: time="2025-10-28T23:15:02.803010854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:02.803918 containerd[1571]: time="2025-10-28T23:15:02.803883974Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 842.35404ms"
Oct 28 23:15:02.803918 containerd[1571]: time="2025-10-28T23:15:02.803915614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\""
Oct 28 23:15:02.804761 containerd[1571]: time="2025-10-28T23:15:02.804738294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 28 23:15:03.861369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2072987558.mount: Deactivated successfully.
Oct 28 23:15:04.016899 containerd[1571]: time="2025-10-28T23:15:04.016843414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:04.017359 containerd[1571]: time="2025-10-28T23:15:04.017325214Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030"
Oct 28 23:15:04.018396 containerd[1571]: time="2025-10-28T23:15:04.018374054Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:04.020061 containerd[1571]: time="2025-10-28T23:15:04.020011374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:04.020951 containerd[1571]: time="2025-10-28T23:15:04.020828174Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.2160506s"
Oct 28 23:15:04.020951 containerd[1571]: time="2025-10-28T23:15:04.020860414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\""
Oct 28 23:15:04.021333 containerd[1571]: time="2025-10-28T23:15:04.021310494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 28 23:15:04.493988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233760971.mount: Deactivated successfully.
Oct 28 23:15:05.408102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 28 23:15:05.409810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 23:15:05.533573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 23:15:05.536912 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 28 23:15:05.753378 containerd[1571]: time="2025-10-28T23:15:05.753255814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:05.754342 containerd[1571]: time="2025-10-28T23:15:05.754300494Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408"
Oct 28 23:15:05.757241 containerd[1571]: time="2025-10-28T23:15:05.756450494Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:05.759757 containerd[1571]: time="2025-10-28T23:15:05.759715654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:05.762153 containerd[1571]: time="2025-10-28T23:15:05.762118374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.74077612s"
Oct 28 23:15:05.762153 containerd[1571]: time="2025-10-28T23:15:05.762152574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Oct 28 23:15:05.762702 containerd[1571]: time="2025-10-28T23:15:05.762665094Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 28 23:15:05.767672 kubelet[2167]: E1028 23:15:05.767630 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 28 23:15:05.770405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 23:15:05.770545 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 28 23:15:05.772824 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.4M memory peak.
Oct 28 23:15:06.216145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941150539.mount: Deactivated successfully.
Oct 28 23:15:06.221142 containerd[1571]: time="2025-10-28T23:15:06.221090534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:06.221821 containerd[1571]: time="2025-10-28T23:15:06.221787094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711"
Oct 28 23:15:06.222551 containerd[1571]: time="2025-10-28T23:15:06.222513534Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:06.224319 containerd[1571]: time="2025-10-28T23:15:06.224285414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:06.224997 containerd[1571]: time="2025-10-28T23:15:06.224960534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 462.2674ms"
Oct 28 23:15:06.225031 containerd[1571]: time="2025-10-28T23:15:06.224995534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Oct 28 23:15:06.225405 containerd[1571]: time="2025-10-28T23:15:06.225384294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 28 23:15:09.642829 containerd[1571]: time="2025-10-28T23:15:09.642759214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:09.643250 containerd[1571]: time="2025-10-28T23:15:09.643227014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768"
Oct 28 23:15:09.644235 containerd[1571]: time="2025-10-28T23:15:09.644210094Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:09.646692 containerd[1571]: time="2025-10-28T23:15:09.646665214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:15:09.648446 containerd[1571]: time="2025-10-28T23:15:09.648398014Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.42298696s"
Oct 28 23:15:09.648614 containerd[1571]: time="2025-10-28T23:15:09.648528014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Oct 28 23:15:15.304061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 23:15:15.304562 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.4M memory peak.
Oct 28 23:15:15.306331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 23:15:15.326510 systemd[1]: Reload requested from client PID 2251 ('systemctl') (unit session-8.scope)...
Oct 28 23:15:15.326524 systemd[1]: Reloading...
Oct 28 23:15:15.403467 zram_generator::config[2299]: No configuration found.
Oct 28 23:15:15.598298 systemd[1]: Reloading finished in 271 ms.
Oct 28 23:15:15.635800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 23:15:15.637843 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 23:15:15.640328 systemd[1]: kubelet.service: Deactivated successfully.
Oct 28 23:15:15.640559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 23:15:15.640598 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.2M memory peak.
Oct 28 23:15:15.641824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 28 23:15:15.784313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 28 23:15:15.789233 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 28 23:15:15.820180 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 28 23:15:15.820180 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 28 23:15:15.820756 kubelet[2343]: I1028 23:15:15.820691 2343 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 28 23:15:16.380482 kubelet[2343]: I1028 23:15:16.380419 2343 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 28 23:15:16.380482 kubelet[2343]: I1028 23:15:16.380467 2343 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 28 23:15:16.381552 kubelet[2343]: I1028 23:15:16.381523 2343 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 28 23:15:16.381552 kubelet[2343]: I1028 23:15:16.381543 2343 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 28 23:15:16.381803 kubelet[2343]: I1028 23:15:16.381774 2343 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 28 23:15:16.475196 kubelet[2343]: E1028 23:15:16.475158 2343 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 28 23:15:16.476444 kubelet[2343]: I1028 23:15:16.476427 2343 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 28 23:15:16.481195 kubelet[2343]: I1028 23:15:16.481129 2343 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 28 23:15:16.483791 kubelet[2343]: I1028 23:15:16.483764 2343 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 28 23:15:16.483998 kubelet[2343]: I1028 23:15:16.483973 2343 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 28 23:15:16.484131 kubelet[2343]: I1028 23:15:16.483997 2343 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 28 23:15:16.484210 kubelet[2343]: I1028 23:15:16.484133 2343 topology_manager.go:138] "Creating topology manager with none policy"
Oct 28 23:15:16.484210 kubelet[2343]: I1028 23:15:16.484142 2343 container_manager_linux.go:306] "Creating device plugin manager"
Oct 28 23:15:16.484247 kubelet[2343]: I1028 23:15:16.484231 2343 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 28 23:15:16.486664 kubelet[2343]: I1028 23:15:16.486647 2343 state_mem.go:36] "Initialized new in-memory state store"
Oct 28 23:15:16.487717 kubelet[2343]: I1028 23:15:16.487692 2343 kubelet.go:475] "Attempting to sync node with API server"
Oct 28 23:15:16.487717 kubelet[2343]: I1028 23:15:16.487714 2343 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 28 23:15:16.488469 kubelet[2343]: I1028 23:15:16.488124 2343 kubelet.go:387] "Adding apiserver pod source"
Oct 28 23:15:16.488469 kubelet[2343]: I1028 23:15:16.488149 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 28 23:15:16.488469 kubelet[2343]: E1028 23:15:16.488219 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 28 23:15:16.488945 kubelet[2343]: E1028 23:15:16.488917 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 28 23:15:16.489111 kubelet[2343]: I1028 23:15:16.489083 2343 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 28 23:15:16.489728 kubelet[2343]: I1028 23:15:16.489701 2343 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 28 23:15:16.489777 kubelet[2343]: I1028 23:15:16.489736 2343 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 28 23:15:16.489813 kubelet[2343]: W1028 23:15:16.489796 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 28 23:15:16.492343 kubelet[2343]: I1028 23:15:16.492230 2343 server.go:1262] "Started kubelet"
Oct 28 23:15:16.492630 kubelet[2343]: I1028 23:15:16.492594 2343 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 28 23:15:16.492676 kubelet[2343]: I1028 23:15:16.492644 2343 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 28 23:15:16.493640 kubelet[2343]: I1028 23:15:16.493215 2343 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 28 23:15:16.493640 kubelet[2343]: I1028 23:15:16.493218 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 28 23:15:16.495122 kubelet[2343]: I1028 23:15:16.495088 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 28 23:15:16.497445 kubelet[2343]: I1028 23:15:16.495889 2343 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 28 23:15:16.497445 kubelet[2343]: I1028 23:15:16.496024 2343 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 28 23:15:16.497445 kubelet[2343]: E1028 23:15:16.496762 2343 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 28 23:15:16.497445 kubelet[2343]: I1028 23:15:16.496788 2343 server.go:310] "Adding debug handlers to kubelet server"
Oct 28 23:15:16.497445 kubelet[2343]: E1028 23:15:16.496839 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms"
Oct 28 23:15:16.497445 kubelet[2343]: E1028 23:15:16.497240 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 28 23:15:16.497601 kubelet[2343]: I1028 23:15:16.497580 2343 factory.go:223] Registration of the systemd container factory successfully
Oct 28 23:15:16.497679 kubelet[2343]: I1028 23:15:16.497651 2343 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 28 23:15:16.498855 kubelet[2343]: I1028 23:15:16.498827 2343 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 28 23:15:16.498902 kubelet[2343]: I1028 23:15:16.498876 2343 reconciler.go:29] "Reconciler: start to sync state"
Oct 28 23:15:16.499168 kubelet[2343]: E1028 23:15:16.496066 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cabcccb27106 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 23:15:16.492194054 +0000 UTC m=+0.700100641,LastTimestamp:2025-10-28 23:15:16.492194054 +0000 UTC m=+0.700100641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 28 23:15:16.499343 kubelet[2343]: I1028 23:15:16.499322 2343 factory.go:223] Registration of the containerd container factory successfully
Oct 28 23:15:16.499898 kubelet[2343]: E1028 23:15:16.499874 2343 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 28 23:15:16.514873 kubelet[2343]: I1028 23:15:16.514845 2343 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 28 23:15:16.514873 kubelet[2343]: I1028 23:15:16.514864 2343 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 28 23:15:16.514873 kubelet[2343]: I1028 23:15:16.514879 2343 state_mem.go:36] "Initialized new in-memory state store"
Oct 28 23:15:16.516382 kubelet[2343]: I1028 23:15:16.516339 2343 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 28 23:15:16.516850 kubelet[2343]: I1028 23:15:16.516829 2343 policy_none.go:49] "None policy: Start"
Oct 28 23:15:16.516850 kubelet[2343]: I1028 23:15:16.516852 2343 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 28 23:15:16.516928 kubelet[2343]: I1028 23:15:16.516863 2343 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 28 23:15:16.517290 kubelet[2343]: I1028 23:15:16.517269 2343 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 28 23:15:16.517325 kubelet[2343]: I1028 23:15:16.517303 2343 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 28 23:15:16.517347 kubelet[2343]: I1028 23:15:16.517332 2343 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 28 23:15:16.517685 kubelet[2343]: E1028 23:15:16.517646 2343 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 28 23:15:16.517886 kubelet[2343]: I1028 23:15:16.517865 2343 policy_none.go:47] "Start"
Oct 28 23:15:16.519130 kubelet[2343]: E1028 23:15:16.519076 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 28 23:15:16.522259 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 28 23:15:16.538905 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 28 23:15:16.541764 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 28 23:15:16.559302 kubelet[2343]: E1028 23:15:16.559279 2343 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 23:15:16.559533 kubelet[2343]: I1028 23:15:16.559498 2343 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 23:15:16.559579 kubelet[2343]: I1028 23:15:16.559512 2343 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 23:15:16.559922 kubelet[2343]: I1028 23:15:16.559818 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 23:15:16.560857 kubelet[2343]: E1028 23:15:16.560836 2343 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 23:15:16.561009 kubelet[2343]: E1028 23:15:16.560875 2343 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 23:15:16.627141 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 28 23:15:16.650784 kubelet[2343]: E1028 23:15:16.650580 2343 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:15:16.653876 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Oct 28 23:15:16.655371 kubelet[2343]: E1028 23:15:16.655322 2343 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:15:16.657346 systemd[1]: Created slice kubepods-burstable-pode6585b09578f24f4c3d281deeed62e90.slice - libcontainer container kubepods-burstable-pode6585b09578f24f4c3d281deeed62e90.slice. Oct 28 23:15:16.658712 kubelet[2343]: E1028 23:15:16.658578 2343 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:15:16.661384 kubelet[2343]: I1028 23:15:16.661369 2343 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:15:16.661881 kubelet[2343]: E1028 23:15:16.661856 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Oct 28 23:15:16.697300 kubelet[2343]: E1028 23:15:16.697244 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Oct 28 23:15:16.699621 kubelet[2343]: I1028 23:15:16.699587 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:16.799908 kubelet[2343]: I1028 23:15:16.799876 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:16.799908 kubelet[2343]: I1028 23:15:16.799910 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:16.799908 kubelet[2343]: I1028 23:15:16.799927 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6585b09578f24f4c3d281deeed62e90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6585b09578f24f4c3d281deeed62e90\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:16.799908 kubelet[2343]: I1028 23:15:16.799941 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6585b09578f24f4c3d281deeed62e90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6585b09578f24f4c3d281deeed62e90\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:16.799908 kubelet[2343]: I1028 23:15:16.799956 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:16.800285 kubelet[2343]: I1028 23:15:16.799977 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:16.800285 kubelet[2343]: I1028 23:15:16.800014 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:16.800285 kubelet[2343]: I1028 23:15:16.800072 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6585b09578f24f4c3d281deeed62e90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6585b09578f24f4c3d281deeed62e90\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:16.863393 kubelet[2343]: I1028 23:15:16.863354 2343 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:15:16.863720 kubelet[2343]: E1028 23:15:16.863701 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Oct 28 23:15:16.963079 kubelet[2343]: E1028 23:15:16.962985 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:16.963873 containerd[1571]: time="2025-10-28T23:15:16.963812054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 28 23:15:16.965298 kubelet[2343]: E1028 23:15:16.965267 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:16.965748 containerd[1571]: time="2025-10-28T23:15:16.965708334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 28 23:15:16.967348 kubelet[2343]: E1028 23:15:16.967133 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:16.967558 containerd[1571]: time="2025-10-28T23:15:16.967529614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6585b09578f24f4c3d281deeed62e90,Namespace:kube-system,Attempt:0,}" Oct 28 23:15:17.098043 kubelet[2343]: E1028 23:15:17.097997 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Oct 28 23:15:17.264883 kubelet[2343]: I1028 23:15:17.264789 2343 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:15:17.265159 kubelet[2343]: E1028 23:15:17.265125 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Oct 28 23:15:17.434000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033194770.mount: Deactivated successfully. 
Oct 28 23:15:17.439184 containerd[1571]: time="2025-10-28T23:15:17.439125174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:15:17.441094 containerd[1571]: time="2025-10-28T23:15:17.441063214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 28 23:15:17.441863 containerd[1571]: time="2025-10-28T23:15:17.441832374Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:15:17.442774 containerd[1571]: time="2025-10-28T23:15:17.442723254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 23:15:17.444166 containerd[1571]: time="2025-10-28T23:15:17.444135574Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:15:17.445769 containerd[1571]: time="2025-10-28T23:15:17.445730734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:15:17.445913 containerd[1571]: time="2025-10-28T23:15:17.445894974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 23:15:17.447731 containerd[1571]: time="2025-10-28T23:15:17.447700494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 
23:15:17.449135 containerd[1571]: time="2025-10-28T23:15:17.448593654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 478.20708ms" Oct 28 23:15:17.449551 containerd[1571]: time="2025-10-28T23:15:17.449513334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 482.13844ms" Oct 28 23:15:17.450473 containerd[1571]: time="2025-10-28T23:15:17.450446894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 479.38468ms" Oct 28 23:15:17.458942 kubelet[2343]: E1028 23:15:17.458911 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 23:15:17.471765 containerd[1571]: time="2025-10-28T23:15:17.471433254Z" level=info msg="connecting to shim 2698ef276a756a8d3d8084e3b5b3e513466b7c453b19e4b6b7d4a0049eb66e65" address="unix:///run/containerd/s/bc935c3e171293b9510f4a8e0d4a2025df2e57fd18453e38ca23a69cd436736a" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:17.480040 containerd[1571]: 
time="2025-10-28T23:15:17.479959494Z" level=info msg="connecting to shim 32070c562bb45a33d254b657df50a352719f381eba2cb4273f3169ab6fcb2a3e" address="unix:///run/containerd/s/fa17058d86298d8c2404e78906351e120c50c73076144e3f87766ffdd32721d6" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:17.481805 containerd[1571]: time="2025-10-28T23:15:17.481779494Z" level=info msg="connecting to shim 244c0e475b14648249dda68a181cbb00d6a79ccd6fda150304bb5a09b4206d94" address="unix:///run/containerd/s/d5a4923bb4ab838a3c3f1c2459c8970045bf97594fb9fafa8dc2edc3adc9b00b" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:17.502585 systemd[1]: Started cri-containerd-2698ef276a756a8d3d8084e3b5b3e513466b7c453b19e4b6b7d4a0049eb66e65.scope - libcontainer container 2698ef276a756a8d3d8084e3b5b3e513466b7c453b19e4b6b7d4a0049eb66e65. Oct 28 23:15:17.506220 systemd[1]: Started cri-containerd-244c0e475b14648249dda68a181cbb00d6a79ccd6fda150304bb5a09b4206d94.scope - libcontainer container 244c0e475b14648249dda68a181cbb00d6a79ccd6fda150304bb5a09b4206d94. Oct 28 23:15:17.507237 systemd[1]: Started cri-containerd-32070c562bb45a33d254b657df50a352719f381eba2cb4273f3169ab6fcb2a3e.scope - libcontainer container 32070c562bb45a33d254b657df50a352719f381eba2cb4273f3169ab6fcb2a3e. 
Oct 28 23:15:17.542768 containerd[1571]: time="2025-10-28T23:15:17.542521414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6585b09578f24f4c3d281deeed62e90,Namespace:kube-system,Attempt:0,} returns sandbox id \"244c0e475b14648249dda68a181cbb00d6a79ccd6fda150304bb5a09b4206d94\"" Oct 28 23:15:17.544045 kubelet[2343]: E1028 23:15:17.543986 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:17.548126 containerd[1571]: time="2025-10-28T23:15:17.548090854Z" level=info msg="CreateContainer within sandbox \"244c0e475b14648249dda68a181cbb00d6a79ccd6fda150304bb5a09b4206d94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 23:15:17.548848 containerd[1571]: time="2025-10-28T23:15:17.548807134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2698ef276a756a8d3d8084e3b5b3e513466b7c453b19e4b6b7d4a0049eb66e65\"" Oct 28 23:15:17.550279 kubelet[2343]: E1028 23:15:17.550258 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:17.554411 containerd[1571]: time="2025-10-28T23:15:17.554110374Z" level=info msg="CreateContainer within sandbox \"2698ef276a756a8d3d8084e3b5b3e513466b7c453b19e4b6b7d4a0049eb66e65\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 23:15:17.554666 containerd[1571]: time="2025-10-28T23:15:17.554637894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"32070c562bb45a33d254b657df50a352719f381eba2cb4273f3169ab6fcb2a3e\"" Oct 28 23:15:17.555409 
kubelet[2343]: E1028 23:15:17.555377 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:17.557326 containerd[1571]: time="2025-10-28T23:15:17.557301534Z" level=info msg="Container 0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:17.558457 containerd[1571]: time="2025-10-28T23:15:17.558417014Z" level=info msg="CreateContainer within sandbox \"32070c562bb45a33d254b657df50a352719f381eba2cb4273f3169ab6fcb2a3e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 23:15:17.561868 containerd[1571]: time="2025-10-28T23:15:17.561576454Z" level=info msg="Container 1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:17.566508 containerd[1571]: time="2025-10-28T23:15:17.566473614Z" level=info msg="CreateContainer within sandbox \"244c0e475b14648249dda68a181cbb00d6a79ccd6fda150304bb5a09b4206d94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36\"" Oct 28 23:15:17.567026 containerd[1571]: time="2025-10-28T23:15:17.566998294Z" level=info msg="StartContainer for \"0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36\"" Oct 28 23:15:17.568179 containerd[1571]: time="2025-10-28T23:15:17.568140254Z" level=info msg="Container f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:17.568348 containerd[1571]: time="2025-10-28T23:15:17.568320014Z" level=info msg="CreateContainer within sandbox \"2698ef276a756a8d3d8084e3b5b3e513466b7c453b19e4b6b7d4a0049eb66e65\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa\"" Oct 
28 23:15:17.568480 containerd[1571]: time="2025-10-28T23:15:17.568140254Z" level=info msg="connecting to shim 0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36" address="unix:///run/containerd/s/d5a4923bb4ab838a3c3f1c2459c8970045bf97594fb9fafa8dc2edc3adc9b00b" protocol=ttrpc version=3 Oct 28 23:15:17.568762 containerd[1571]: time="2025-10-28T23:15:17.568710334Z" level=info msg="StartContainer for \"1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa\"" Oct 28 23:15:17.570723 containerd[1571]: time="2025-10-28T23:15:17.570669534Z" level=info msg="connecting to shim 1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa" address="unix:///run/containerd/s/bc935c3e171293b9510f4a8e0d4a2025df2e57fd18453e38ca23a69cd436736a" protocol=ttrpc version=3 Oct 28 23:15:17.577893 containerd[1571]: time="2025-10-28T23:15:17.577761574Z" level=info msg="CreateContainer within sandbox \"32070c562bb45a33d254b657df50a352719f381eba2cb4273f3169ab6fcb2a3e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910\"" Oct 28 23:15:17.578205 containerd[1571]: time="2025-10-28T23:15:17.578166814Z" level=info msg="StartContainer for \"f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910\"" Oct 28 23:15:17.579304 containerd[1571]: time="2025-10-28T23:15:17.579277094Z" level=info msg="connecting to shim f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910" address="unix:///run/containerd/s/fa17058d86298d8c2404e78906351e120c50c73076144e3f87766ffdd32721d6" protocol=ttrpc version=3 Oct 28 23:15:17.593596 systemd[1]: Started cri-containerd-1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa.scope - libcontainer container 1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa. 
Oct 28 23:15:17.597211 systemd[1]: Started cri-containerd-0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36.scope - libcontainer container 0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36. Oct 28 23:15:17.598489 systemd[1]: Started cri-containerd-f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910.scope - libcontainer container f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910. Oct 28 23:15:17.619348 kubelet[2343]: E1028 23:15:17.619243 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 23:15:17.638631 containerd[1571]: time="2025-10-28T23:15:17.638586894Z" level=info msg="StartContainer for \"1fc25969511498a7b811c1517ef6e024c398c42b5a0ade126416430cd08c91fa\" returns successfully" Oct 28 23:15:17.648342 containerd[1571]: time="2025-10-28T23:15:17.648258454Z" level=info msg="StartContainer for \"f5284cd43a8fec01cee3d5f3e75a5b344a7165d58f4a840a0b35bbae6effa910\" returns successfully" Oct 28 23:15:17.650291 containerd[1571]: time="2025-10-28T23:15:17.650256854Z" level=info msg="StartContainer for \"0615eacfd63bee157879ce2908d11f2ea0a3b55b6a20d74d827cfc2033081d36\" returns successfully" Oct 28 23:15:17.704268 kubelet[2343]: E1028 23:15:17.704224 2343 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 23:15:18.066360 kubelet[2343]: I1028 23:15:18.066329 2343 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 
23:15:18.530873 kubelet[2343]: E1028 23:15:18.530841 2343 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:15:18.530994 kubelet[2343]: E1028 23:15:18.530971 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:18.532528 kubelet[2343]: E1028 23:15:18.532509 2343 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:15:18.532636 kubelet[2343]: E1028 23:15:18.532619 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:18.534048 kubelet[2343]: E1028 23:15:18.534006 2343 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:15:18.534136 kubelet[2343]: E1028 23:15:18.534096 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:19.219467 kubelet[2343]: E1028 23:15:19.218576 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 23:15:19.303222 kubelet[2343]: I1028 23:15:19.303188 2343 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 23:15:19.303222 kubelet[2343]: E1028 23:15:19.303225 2343 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 28 23:15:19.399731 kubelet[2343]: I1028 23:15:19.399679 2343 kubelet.go:3219] "Creating a mirror pod 
for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:19.405229 kubelet[2343]: E1028 23:15:19.405196 2343 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:19.405229 kubelet[2343]: I1028 23:15:19.405227 2343 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:19.407018 kubelet[2343]: E1028 23:15:19.406991 2343 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:19.407018 kubelet[2343]: I1028 23:15:19.407017 2343 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:19.408577 kubelet[2343]: E1028 23:15:19.408555 2343 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:19.490201 kubelet[2343]: I1028 23:15:19.489768 2343 apiserver.go:52] "Watching apiserver" Oct 28 23:15:19.499506 kubelet[2343]: I1028 23:15:19.499477 2343 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 23:15:19.534985 kubelet[2343]: I1028 23:15:19.534952 2343 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:19.535343 kubelet[2343]: I1028 23:15:19.535323 2343 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:19.537267 kubelet[2343]: E1028 23:15:19.537241 2343 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:19.537404 kubelet[2343]: E1028 23:15:19.537384 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:19.537786 kubelet[2343]: E1028 23:15:19.537758 2343 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:19.537878 kubelet[2343]: E1028 23:15:19.537862 2343 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:21.225797 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-8.scope)... Oct 28 23:15:21.226083 systemd[1]: Reloading... Oct 28 23:15:21.301468 zram_generator::config[2677]: No configuration found. Oct 28 23:15:21.456502 systemd[1]: Reloading finished in 230 ms. Oct 28 23:15:21.479858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:15:21.496301 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 23:15:21.496550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:15:21.496600 systemd[1]: kubelet.service: Consumed 967ms CPU time, 123.8M memory peak. Oct 28 23:15:21.498155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:15:21.614194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:15:21.618681 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 23:15:21.657729 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Oct 28 23:15:21.657729 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:15:21.658024 kubelet[2719]: I1028 23:15:21.657771 2719 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 23:15:21.663367 kubelet[2719]: I1028 23:15:21.663334 2719 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 23:15:21.663367 kubelet[2719]: I1028 23:15:21.663363 2719 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 23:15:21.663477 kubelet[2719]: I1028 23:15:21.663391 2719 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 23:15:21.663477 kubelet[2719]: I1028 23:15:21.663397 2719 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 28 23:15:21.663676 kubelet[2719]: I1028 23:15:21.663660 2719 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 23:15:21.665927 kubelet[2719]: I1028 23:15:21.665070 2719 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 23:15:21.667782 kubelet[2719]: I1028 23:15:21.667755 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 23:15:21.670738 kubelet[2719]: I1028 23:15:21.670717 2719 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 23:15:21.673123 kubelet[2719]: I1028 23:15:21.673105 2719 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 28 23:15:21.673312 kubelet[2719]: I1028 23:15:21.673289 2719 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 23:15:21.673502 kubelet[2719]: I1028 23:15:21.673312 2719 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 23:15:21.673581 kubelet[2719]: I1028 23:15:21.673507 2719 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 23:15:21.673581 
kubelet[2719]: I1028 23:15:21.673516 2719 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 23:15:21.673581 kubelet[2719]: I1028 23:15:21.673544 2719 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 23:15:21.674443 kubelet[2719]: I1028 23:15:21.674405 2719 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:15:21.674579 kubelet[2719]: I1028 23:15:21.674566 2719 kubelet.go:475] "Attempting to sync node with API server" Oct 28 23:15:21.674606 kubelet[2719]: I1028 23:15:21.674582 2719 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 23:15:21.674606 kubelet[2719]: I1028 23:15:21.674604 2719 kubelet.go:387] "Adding apiserver pod source" Oct 28 23:15:21.674652 kubelet[2719]: I1028 23:15:21.674617 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 23:15:21.675663 kubelet[2719]: I1028 23:15:21.675644 2719 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 23:15:21.676226 kubelet[2719]: I1028 23:15:21.676203 2719 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 23:15:21.676270 kubelet[2719]: I1028 23:15:21.676238 2719 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 23:15:21.678249 kubelet[2719]: I1028 23:15:21.677952 2719 server.go:1262] "Started kubelet" Oct 28 23:15:21.678793 kubelet[2719]: I1028 23:15:21.678772 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 23:15:21.682819 kubelet[2719]: E1028 23:15:21.682762 2719 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 23:15:21.686620 kubelet[2719]: I1028 23:15:21.686595 2719 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 23:15:21.686680 kubelet[2719]: I1028 23:15:21.686644 2719 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 23:15:21.686902 kubelet[2719]: E1028 23:15:21.686773 2719 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:15:21.686902 kubelet[2719]: I1028 23:15:21.686774 2719 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 23:15:21.686902 kubelet[2719]: I1028 23:15:21.686825 2719 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 23:15:21.687084 kubelet[2719]: I1028 23:15:21.687011 2719 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 23:15:21.687084 kubelet[2719]: I1028 23:15:21.687063 2719 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 23:15:21.687185 kubelet[2719]: I1028 23:15:21.687166 2719 reconciler.go:29] "Reconciler: start to sync state" Oct 28 23:15:21.688861 kubelet[2719]: I1028 23:15:21.688843 2719 server.go:310] "Adding debug handlers to kubelet server" Oct 28 23:15:21.690084 kubelet[2719]: I1028 23:15:21.689823 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 23:15:21.692920 kubelet[2719]: I1028 23:15:21.692692 2719 factory.go:223] Registration of the systemd container factory successfully Oct 28 23:15:21.692920 kubelet[2719]: I1028 23:15:21.692804 2719 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 
23:15:21.695859 kubelet[2719]: I1028 23:15:21.695837 2719 factory.go:223] Registration of the containerd container factory successfully Oct 28 23:15:21.701695 kubelet[2719]: I1028 23:15:21.701663 2719 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 23:15:21.702608 kubelet[2719]: I1028 23:15:21.702579 2719 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 28 23:15:21.702608 kubelet[2719]: I1028 23:15:21.702600 2719 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 23:15:21.702679 kubelet[2719]: I1028 23:15:21.702621 2719 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 23:15:21.702679 kubelet[2719]: E1028 23:15:21.702659 2719 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.737926 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.737945 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.737964 2719 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.738079 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.738088 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.738102 2719 policy_none.go:49] "None policy: Start" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.738110 2719 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 23:15:21.738529 kubelet[2719]: I1028 23:15:21.738118 2719 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 23:15:21.739824 kubelet[2719]: I1028 23:15:21.739137 2719 state_mem.go:77] "Updated machine 
memory state" logger="Memory Manager state checkpoint" Oct 28 23:15:21.739824 kubelet[2719]: I1028 23:15:21.739161 2719 policy_none.go:47] "Start" Oct 28 23:15:21.743381 kubelet[2719]: E1028 23:15:21.743362 2719 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 23:15:21.744040 kubelet[2719]: I1028 23:15:21.743974 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 23:15:21.744040 kubelet[2719]: I1028 23:15:21.743991 2719 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 23:15:21.744446 kubelet[2719]: I1028 23:15:21.744213 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 23:15:21.745244 kubelet[2719]: E1028 23:15:21.744759 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 23:15:21.803509 kubelet[2719]: I1028 23:15:21.803475 2719 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:21.803687 kubelet[2719]: I1028 23:15:21.803655 2719 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:21.803734 kubelet[2719]: I1028 23:15:21.803606 2719 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:21.848545 kubelet[2719]: I1028 23:15:21.848519 2719 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:15:21.854795 kubelet[2719]: I1028 23:15:21.854180 2719 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 28 23:15:21.854795 kubelet[2719]: I1028 23:15:21.854260 2719 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 23:15:21.988181 kubelet[2719]: I1028 23:15:21.988147 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6585b09578f24f4c3d281deeed62e90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6585b09578f24f4c3d281deeed62e90\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:21.988339 kubelet[2719]: I1028 23:15:21.988320 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6585b09578f24f4c3d281deeed62e90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6585b09578f24f4c3d281deeed62e90\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:21.988452 kubelet[2719]: I1028 23:15:21.988414 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:21.988487 kubelet[2719]: I1028 23:15:21.988464 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:21.988601 kubelet[2719]: I1028 23:15:21.988489 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:21.988601 kubelet[2719]: I1028 23:15:21.988504 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:21.988601 kubelet[2719]: I1028 23:15:21.988523 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:15:21.988601 kubelet[2719]: I1028 23:15:21.988568 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:21.988601 kubelet[2719]: I1028 23:15:21.988595 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6585b09578f24f4c3d281deeed62e90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6585b09578f24f4c3d281deeed62e90\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:15:22.109499 kubelet[2719]: E1028 23:15:22.109461 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:22.110871 kubelet[2719]: E1028 23:15:22.110572 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 
23:15:22.110871 kubelet[2719]: E1028 23:15:22.110649 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:22.675761 kubelet[2719]: I1028 23:15:22.675715 2719 apiserver.go:52] "Watching apiserver" Oct 28 23:15:22.722912 kubelet[2719]: I1028 23:15:22.722885 2719 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:22.738683 kubelet[2719]: E1028 23:15:22.738573 2719 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 23:15:22.739310 kubelet[2719]: E1028 23:15:22.738777 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:22.741591 kubelet[2719]: E1028 23:15:22.740906 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:22.742452 kubelet[2719]: E1028 23:15:22.741886 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:22.762332 kubelet[2719]: I1028 23:15:22.762278 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.762264214 podStartE2EDuration="1.762264214s" podCreationTimestamp="2025-10-28 23:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:15:22.761856494 +0000 UTC m=+1.140819681" watchObservedRunningTime="2025-10-28 23:15:22.762264214 +0000 UTC m=+1.141227401" Oct 28 
23:15:22.784922 kubelet[2719]: I1028 23:15:22.784869 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.784856134 podStartE2EDuration="1.784856134s" podCreationTimestamp="2025-10-28 23:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:15:22.772780654 +0000 UTC m=+1.151743841" watchObservedRunningTime="2025-10-28 23:15:22.784856134 +0000 UTC m=+1.163819321" Oct 28 23:15:22.788015 kubelet[2719]: I1028 23:15:22.787971 2719 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 23:15:23.723978 kubelet[2719]: E1028 23:15:23.723917 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:23.724487 kubelet[2719]: E1028 23:15:23.724468 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:25.650243 kubelet[2719]: E1028 23:15:25.650202 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:26.973806 kubelet[2719]: I1028 23:15:26.973772 2719 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 23:15:26.974574 containerd[1571]: time="2025-10-28T23:15:26.974483563Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 28 23:15:26.974821 kubelet[2719]: I1028 23:15:26.974667 2719 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 23:15:28.026668 kubelet[2719]: I1028 23:15:28.026482 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.02646361 podStartE2EDuration="7.02646361s" podCreationTimestamp="2025-10-28 23:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:15:22.784976774 +0000 UTC m=+1.163939961" watchObservedRunningTime="2025-10-28 23:15:28.02646361 +0000 UTC m=+6.405426797" Oct 28 23:15:28.038271 systemd[1]: Created slice kubepods-besteffort-pod77ac0983_8da0_4dda_b022_065c4b68f3d4.slice - libcontainer container kubepods-besteffort-pod77ac0983_8da0_4dda_b022_065c4b68f3d4.slice. Oct 28 23:15:28.082453 kubelet[2719]: E1028 23:15:28.082373 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:28.129124 kubelet[2719]: I1028 23:15:28.129078 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77ac0983-8da0-4dda-b022-065c4b68f3d4-xtables-lock\") pod \"kube-proxy-hgf59\" (UID: \"77ac0983-8da0-4dda-b022-065c4b68f3d4\") " pod="kube-system/kube-proxy-hgf59" Oct 28 23:15:28.129124 kubelet[2719]: I1028 23:15:28.129116 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77ac0983-8da0-4dda-b022-065c4b68f3d4-lib-modules\") pod \"kube-proxy-hgf59\" (UID: \"77ac0983-8da0-4dda-b022-065c4b68f3d4\") " pod="kube-system/kube-proxy-hgf59" Oct 28 23:15:28.129124 kubelet[2719]: I1028 23:15:28.129135 2719 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77ac0983-8da0-4dda-b022-065c4b68f3d4-kube-proxy\") pod \"kube-proxy-hgf59\" (UID: \"77ac0983-8da0-4dda-b022-065c4b68f3d4\") " pod="kube-system/kube-proxy-hgf59" Oct 28 23:15:28.129288 kubelet[2719]: I1028 23:15:28.129177 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdmwp\" (UniqueName: \"kubernetes.io/projected/77ac0983-8da0-4dda-b022-065c4b68f3d4-kube-api-access-rdmwp\") pod \"kube-proxy-hgf59\" (UID: \"77ac0983-8da0-4dda-b022-065c4b68f3d4\") " pod="kube-system/kube-proxy-hgf59" Oct 28 23:15:28.189531 systemd[1]: Created slice kubepods-besteffort-pod04b483ea_9fe6_4b9f_b659_ef633967e54e.slice - libcontainer container kubepods-besteffort-pod04b483ea_9fe6_4b9f_b659_ef633967e54e.slice. Oct 28 23:15:28.230175 kubelet[2719]: I1028 23:15:28.230127 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v284b\" (UniqueName: \"kubernetes.io/projected/04b483ea-9fe6-4b9f-b659-ef633967e54e-kube-api-access-v284b\") pod \"tigera-operator-65cdcdfd6d-q6t6d\" (UID: \"04b483ea-9fe6-4b9f-b659-ef633967e54e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-q6t6d" Oct 28 23:15:28.230175 kubelet[2719]: I1028 23:15:28.230189 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04b483ea-9fe6-4b9f-b659-ef633967e54e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-q6t6d\" (UID: \"04b483ea-9fe6-4b9f-b659-ef633967e54e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-q6t6d" Oct 28 23:15:28.354558 kubelet[2719]: E1028 23:15:28.354495 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 28 23:15:28.356213 containerd[1571]: time="2025-10-28T23:15:28.356099043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hgf59,Uid:77ac0983-8da0-4dda-b022-065c4b68f3d4,Namespace:kube-system,Attempt:0,}" Oct 28 23:15:28.371102 containerd[1571]: time="2025-10-28T23:15:28.371042396Z" level=info msg="connecting to shim d30f4b90abde5ffb538e83acdf1dc06663d6b250cd6a83f791e9ad06ecaca50d" address="unix:///run/containerd/s/e2e53301772d11fb44e6df226c69db601e21df6393624a5038c110ae96f36139" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:28.400597 systemd[1]: Started cri-containerd-d30f4b90abde5ffb538e83acdf1dc06663d6b250cd6a83f791e9ad06ecaca50d.scope - libcontainer container d30f4b90abde5ffb538e83acdf1dc06663d6b250cd6a83f791e9ad06ecaca50d. Oct 28 23:15:28.421940 containerd[1571]: time="2025-10-28T23:15:28.421900347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hgf59,Uid:77ac0983-8da0-4dda-b022-065c4b68f3d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d30f4b90abde5ffb538e83acdf1dc06663d6b250cd6a83f791e9ad06ecaca50d\"" Oct 28 23:15:28.422642 kubelet[2719]: E1028 23:15:28.422611 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:28.428791 containerd[1571]: time="2025-10-28T23:15:28.428735751Z" level=info msg="CreateContainer within sandbox \"d30f4b90abde5ffb538e83acdf1dc06663d6b250cd6a83f791e9ad06ecaca50d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 23:15:28.437951 containerd[1571]: time="2025-10-28T23:15:28.437920928Z" level=info msg="Container ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:28.447290 containerd[1571]: time="2025-10-28T23:15:28.447174424Z" level=info msg="CreateContainer within sandbox 
\"d30f4b90abde5ffb538e83acdf1dc06663d6b250cd6a83f791e9ad06ecaca50d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38\"" Oct 28 23:15:28.448533 containerd[1571]: time="2025-10-28T23:15:28.448507849Z" level=info msg="StartContainer for \"ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38\"" Oct 28 23:15:28.449900 containerd[1571]: time="2025-10-28T23:15:28.449877514Z" level=info msg="connecting to shim ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38" address="unix:///run/containerd/s/e2e53301772d11fb44e6df226c69db601e21df6393624a5038c110ae96f36139" protocol=ttrpc version=3 Oct 28 23:15:28.472594 systemd[1]: Started cri-containerd-ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38.scope - libcontainer container ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38. Oct 28 23:15:28.496548 containerd[1571]: time="2025-10-28T23:15:28.496509193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-q6t6d,Uid:04b483ea-9fe6-4b9f-b659-ef633967e54e,Namespace:tigera-operator,Attempt:0,}" Oct 28 23:15:28.504695 containerd[1571]: time="2025-10-28T23:15:28.504658661Z" level=info msg="StartContainer for \"ee80083f5cffd03fe93b065988c0980e076c459242f28de39c3c7b733ee60a38\" returns successfully" Oct 28 23:15:28.517472 containerd[1571]: time="2025-10-28T23:15:28.516951284Z" level=info msg="connecting to shim 113a6b53de685b995470999a6ac1e584b2c63cd024d810da23096bb9e21061fa" address="unix:///run/containerd/s/8653bb61a11a7d44bb734886499e8148c498fffbcbd6a64bdf7bf64430854127" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:28.539603 systemd[1]: Started cri-containerd-113a6b53de685b995470999a6ac1e584b2c63cd024d810da23096bb9e21061fa.scope - libcontainer container 113a6b53de685b995470999a6ac1e584b2c63cd024d810da23096bb9e21061fa. 
Oct 28 23:15:28.575110 containerd[1571]: time="2025-10-28T23:15:28.575062634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-q6t6d,Uid:04b483ea-9fe6-4b9f-b659-ef633967e54e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"113a6b53de685b995470999a6ac1e584b2c63cd024d810da23096bb9e21061fa\"" Oct 28 23:15:28.576941 containerd[1571]: time="2025-10-28T23:15:28.576913973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 23:15:28.735100 kubelet[2719]: E1028 23:15:28.734905 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:28.737498 kubelet[2719]: E1028 23:15:28.737469 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:29.738291 kubelet[2719]: E1028 23:15:29.736206 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:29.760636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682196303.mount: Deactivated successfully. 
Oct 28 23:15:31.051270 kubelet[2719]: E1028 23:15:31.051211 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:31.054862 containerd[1571]: time="2025-10-28T23:15:31.054812897Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:31.055285 containerd[1571]: time="2025-10-28T23:15:31.055253733Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 28 23:15:31.056658 containerd[1571]: time="2025-10-28T23:15:31.056621720Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:31.059612 containerd[1571]: time="2025-10-28T23:15:31.059533174Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:31.060332 containerd[1571]: time="2025-10-28T23:15:31.060303366Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.483355394s" Oct 28 23:15:31.060510 containerd[1571]: time="2025-10-28T23:15:31.060399526Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 28 23:15:31.066319 containerd[1571]: time="2025-10-28T23:15:31.066282071Z" level=info msg="CreateContainer within sandbox 
\"113a6b53de685b995470999a6ac1e584b2c63cd024d810da23096bb9e21061fa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 23:15:31.069029 kubelet[2719]: I1028 23:15:31.068974 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hgf59" podStartSLOduration=3.068959807 podStartE2EDuration="3.068959807s" podCreationTimestamp="2025-10-28 23:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:15:28.7515065 +0000 UTC m=+7.130469687" watchObservedRunningTime="2025-10-28 23:15:31.068959807 +0000 UTC m=+9.447922994" Oct 28 23:15:31.081624 containerd[1571]: time="2025-10-28T23:15:31.081502971Z" level=info msg="Container 34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:31.087452 containerd[1571]: time="2025-10-28T23:15:31.087345757Z" level=info msg="CreateContainer within sandbox \"113a6b53de685b995470999a6ac1e584b2c63cd024d810da23096bb9e21061fa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54\"" Oct 28 23:15:31.088080 containerd[1571]: time="2025-10-28T23:15:31.087926152Z" level=info msg="StartContainer for \"34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54\"" Oct 28 23:15:31.088901 containerd[1571]: time="2025-10-28T23:15:31.088874903Z" level=info msg="connecting to shim 34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54" address="unix:///run/containerd/s/8653bb61a11a7d44bb734886499e8148c498fffbcbd6a64bdf7bf64430854127" protocol=ttrpc version=3 Oct 28 23:15:31.126608 systemd[1]: Started cri-containerd-34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54.scope - libcontainer container 34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54. 
Oct 28 23:15:31.154909 containerd[1571]: time="2025-10-28T23:15:31.154864095Z" level=info msg="StartContainer for \"34dfffb0456695cf557788169963df22abac19df4fe355dd7eb1ca3045bd3d54\" returns successfully" Oct 28 23:15:31.741788 kubelet[2719]: E1028 23:15:31.741746 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:31.751777 kubelet[2719]: I1028 23:15:31.751714 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-q6t6d" podStartSLOduration=1.266888134 podStartE2EDuration="3.751698194s" podCreationTimestamp="2025-10-28 23:15:28 +0000 UTC" firstStartedPulling="2025-10-28 23:15:28.576549257 +0000 UTC m=+6.955512444" lastFinishedPulling="2025-10-28 23:15:31.061359317 +0000 UTC m=+9.440322504" observedRunningTime="2025-10-28 23:15:31.751502356 +0000 UTC m=+10.130465543" watchObservedRunningTime="2025-10-28 23:15:31.751698194 +0000 UTC m=+10.130661381" Oct 28 23:15:35.658753 kubelet[2719]: E1028 23:15:35.658678 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:35.756967 kubelet[2719]: E1028 23:15:35.756938 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:36.600891 sudo[1789]: pam_unix(sudo:session): session closed for user root Oct 28 23:15:36.602252 sshd[1788]: Connection closed by 10.0.0.1 port 41352 Oct 28 23:15:36.602785 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Oct 28 23:15:36.606987 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:41352.service: Deactivated successfully. Oct 28 23:15:36.611943 systemd[1]: session-8.scope: Deactivated successfully. 
Oct 28 23:15:36.612132 systemd[1]: session-8.scope: Consumed 7.531s CPU time, 213.6M memory peak. Oct 28 23:15:36.615758 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. Oct 28 23:15:36.617255 systemd-logind[1547]: Removed session 8. Oct 28 23:15:38.431605 update_engine[1552]: I20251028 23:15:38.430456 1552 update_attempter.cc:509] Updating boot flags... Oct 28 23:15:45.929263 systemd[1]: Created slice kubepods-besteffort-pod766b5ceb_fc55_4ba7_88e4_07bdf94f194b.slice - libcontainer container kubepods-besteffort-pod766b5ceb_fc55_4ba7_88e4_07bdf94f194b.slice. Oct 28 23:15:45.954964 kubelet[2719]: I1028 23:15:45.954921 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/766b5ceb-fc55-4ba7-88e4-07bdf94f194b-tigera-ca-bundle\") pod \"calico-typha-75656ccf98-zttf4\" (UID: \"766b5ceb-fc55-4ba7-88e4-07bdf94f194b\") " pod="calico-system/calico-typha-75656ccf98-zttf4" Oct 28 23:15:45.955503 kubelet[2719]: I1028 23:15:45.955392 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/766b5ceb-fc55-4ba7-88e4-07bdf94f194b-typha-certs\") pod \"calico-typha-75656ccf98-zttf4\" (UID: \"766b5ceb-fc55-4ba7-88e4-07bdf94f194b\") " pod="calico-system/calico-typha-75656ccf98-zttf4" Oct 28 23:15:45.955503 kubelet[2719]: I1028 23:15:45.955471 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982fc\" (UniqueName: \"kubernetes.io/projected/766b5ceb-fc55-4ba7-88e4-07bdf94f194b-kube-api-access-982fc\") pod \"calico-typha-75656ccf98-zttf4\" (UID: \"766b5ceb-fc55-4ba7-88e4-07bdf94f194b\") " pod="calico-system/calico-typha-75656ccf98-zttf4" Oct 28 23:15:46.125840 systemd[1]: Created slice kubepods-besteffort-pod6c4f8101_142d_4300_b32f_619b426999f4.slice - libcontainer container 
kubepods-besteffort-pod6c4f8101_142d_4300_b32f_619b426999f4.slice. Oct 28 23:15:46.157393 kubelet[2719]: I1028 23:15:46.157336 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6c4f8101-142d-4300-b32f-619b426999f4-node-certs\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157524 kubelet[2719]: I1028 23:15:46.157401 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-var-run-calico\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157524 kubelet[2719]: I1028 23:15:46.157452 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-cni-net-dir\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157524 kubelet[2719]: I1028 23:15:46.157488 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-xtables-lock\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157598 kubelet[2719]: I1028 23:15:46.157583 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bcnq\" (UniqueName: \"kubernetes.io/projected/6c4f8101-142d-4300-b32f-619b426999f4-kube-api-access-2bcnq\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 
28 23:15:46.157619 kubelet[2719]: I1028 23:15:46.157603 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-cni-bin-dir\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157619 kubelet[2719]: I1028 23:15:46.157616 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-policysync\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157672 kubelet[2719]: I1028 23:15:46.157629 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-cni-log-dir\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157672 kubelet[2719]: I1028 23:15:46.157654 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-lib-modules\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157750 kubelet[2719]: I1028 23:15:46.157705 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-flexvol-driver-host\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157782 kubelet[2719]: I1028 23:15:46.157756 2719 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c4f8101-142d-4300-b32f-619b426999f4-tigera-ca-bundle\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.157808 kubelet[2719]: I1028 23:15:46.157800 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c4f8101-142d-4300-b32f-619b426999f4-var-lib-calico\") pod \"calico-node-2rcch\" (UID: \"6c4f8101-142d-4300-b32f-619b426999f4\") " pod="calico-system/calico-node-2rcch" Oct 28 23:15:46.239915 kubelet[2719]: E1028 23:15:46.239086 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:46.240055 containerd[1571]: time="2025-10-28T23:15:46.239658425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75656ccf98-zttf4,Uid:766b5ceb-fc55-4ba7-88e4-07bdf94f194b,Namespace:calico-system,Attempt:0,}" Oct 28 23:15:46.263897 kubelet[2719]: E1028 23:15:46.263466 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.263897 kubelet[2719]: W1028 23:15:46.263498 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.263897 kubelet[2719]: E1028 23:15:46.263534 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.263897 kubelet[2719]: E1028 23:15:46.263761 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.263897 kubelet[2719]: W1028 23:15:46.263770 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.263897 kubelet[2719]: E1028 23:15:46.263780 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.271756 kubelet[2719]: E1028 23:15:46.271736 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.271756 kubelet[2719]: W1028 23:15:46.271752 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.271961 kubelet[2719]: E1028 23:15:46.271765 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.282791 containerd[1571]: time="2025-10-28T23:15:46.282359275Z" level=info msg="connecting to shim 0d1b365507538379f2a9867e1fa9360c2e1b1d66922226666b0cc9610d542bd7" address="unix:///run/containerd/s/9f3a35bcf2898901fdd9c59155f67a6afdbe3e5b91b00af2e300bb5f95e174f9" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:46.309705 kubelet[2719]: E1028 23:15:46.309662 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:15:46.313731 systemd[1]: Started cri-containerd-0d1b365507538379f2a9867e1fa9360c2e1b1d66922226666b0cc9610d542bd7.scope - libcontainer container 0d1b365507538379f2a9867e1fa9360c2e1b1d66922226666b0cc9610d542bd7. Oct 28 23:15:46.348975 kubelet[2719]: E1028 23:15:46.348947 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.348975 kubelet[2719]: W1028 23:15:46.348975 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.349174 kubelet[2719]: E1028 23:15:46.348997 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.349174 kubelet[2719]: E1028 23:15:46.349116 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.349174 kubelet[2719]: W1028 23:15:46.349124 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.349235 kubelet[2719]: E1028 23:15:46.349179 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.359907 kubelet[2719]: E1028 23:15:46.359599 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.359982 kubelet[2719]: W1028 23:15:46.359909 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.359982 kubelet[2719]: E1028 23:15:46.359925 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.359982 kubelet[2719]: I1028 23:15:46.359949 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b1cc5e3-6c35-4356-8831-57857e48a65e-registration-dir\") pod \"csi-node-driver-n4zdp\" (UID: \"6b1cc5e3-6c35-4356-8831-57857e48a65e\") " pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:46.360546 kubelet[2719]: E1028 23:15:46.360110 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.360546 kubelet[2719]: W1028 23:15:46.360123 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.360546 kubelet[2719]: E1028 23:15:46.360132 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.360546 kubelet[2719]: I1028 23:15:46.360157 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b1cc5e3-6c35-4356-8831-57857e48a65e-kubelet-dir\") pod \"csi-node-driver-n4zdp\" (UID: \"6b1cc5e3-6c35-4356-8831-57857e48a65e\") " pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:46.361071 kubelet[2719]: E1028 23:15:46.361018 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.361071 kubelet[2719]: W1028 23:15:46.361065 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.361162 kubelet[2719]: E1028 23:15:46.361079 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.361162 kubelet[2719]: I1028 23:15:46.361100 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b1cc5e3-6c35-4356-8831-57857e48a65e-varrun\") pod \"csi-node-driver-n4zdp\" (UID: \"6b1cc5e3-6c35-4356-8831-57857e48a65e\") " pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:46.361269 kubelet[2719]: E1028 23:15:46.361254 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.361269 kubelet[2719]: W1028 23:15:46.361267 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.361325 kubelet[2719]: E1028 23:15:46.361276 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.361366 kubelet[2719]: I1028 23:15:46.361348 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5d57\" (UniqueName: \"kubernetes.io/projected/6b1cc5e3-6c35-4356-8831-57857e48a65e-kube-api-access-v5d57\") pod \"csi-node-driver-n4zdp\" (UID: \"6b1cc5e3-6c35-4356-8831-57857e48a65e\") " pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:46.361603 kubelet[2719]: E1028 23:15:46.361585 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.361603 kubelet[2719]: W1028 23:15:46.361599 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.361670 kubelet[2719]: E1028 23:15:46.361610 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.361774 kubelet[2719]: E1028 23:15:46.361761 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.361774 kubelet[2719]: W1028 23:15:46.361773 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.361819 kubelet[2719]: E1028 23:15:46.361783 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.361977 kubelet[2719]: E1028 23:15:46.361965 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.361977 kubelet[2719]: W1028 23:15:46.361974 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.362027 kubelet[2719]: E1028 23:15:46.361982 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.362116 kubelet[2719]: E1028 23:15:46.362105 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.362116 kubelet[2719]: W1028 23:15:46.362114 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.362157 kubelet[2719]: E1028 23:15:46.362122 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.362157 kubelet[2719]: I1028 23:15:46.362143 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b1cc5e3-6c35-4356-8831-57857e48a65e-socket-dir\") pod \"csi-node-driver-n4zdp\" (UID: \"6b1cc5e3-6c35-4356-8831-57857e48a65e\") " pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:46.362580 kubelet[2719]: E1028 23:15:46.362558 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.362580 kubelet[2719]: W1028 23:15:46.362575 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.362678 kubelet[2719]: E1028 23:15:46.362587 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.362764 kubelet[2719]: E1028 23:15:46.362750 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.362764 kubelet[2719]: W1028 23:15:46.362762 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.362813 kubelet[2719]: E1028 23:15:46.362772 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.363519 kubelet[2719]: E1028 23:15:46.363502 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.363519 kubelet[2719]: W1028 23:15:46.363517 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.363580 kubelet[2719]: E1028 23:15:46.363528 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.363932 kubelet[2719]: E1028 23:15:46.363920 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.363932 kubelet[2719]: W1028 23:15:46.363930 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.363992 kubelet[2719]: E1028 23:15:46.363937 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.364076 kubelet[2719]: E1028 23:15:46.364060 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.364076 kubelet[2719]: W1028 23:15:46.364071 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.364118 kubelet[2719]: E1028 23:15:46.364080 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.416120 containerd[1571]: time="2025-10-28T23:15:46.415994568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75656ccf98-zttf4,Uid:766b5ceb-fc55-4ba7-88e4-07bdf94f194b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d1b365507538379f2a9867e1fa9360c2e1b1d66922226666b0cc9610d542bd7\"" Oct 28 23:15:46.417950 kubelet[2719]: E1028 23:15:46.417878 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:46.418943 containerd[1571]: time="2025-10-28T23:15:46.418918797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 28 23:15:46.431345 kubelet[2719]: E1028 23:15:46.431313 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:46.433040 containerd[1571]: time="2025-10-28T23:15:46.433004308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rcch,Uid:6c4f8101-142d-4300-b32f-619b426999f4,Namespace:calico-system,Attempt:0,}" Oct 28 23:15:46.455011 containerd[1571]: time="2025-10-28T23:15:46.454949271Z" level=info 
msg="connecting to shim 7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2" address="unix:///run/containerd/s/7854fc5847537e1664c6833dee5420728a5957d7024619a3a5082fcd44120487" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:15:46.466875 kubelet[2719]: E1028 23:15:46.465600 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.466875 kubelet[2719]: W1028 23:15:46.465622 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.466875 kubelet[2719]: E1028 23:15:46.465643 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.467240 kubelet[2719]: E1028 23:15:46.467198 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.467240 kubelet[2719]: W1028 23:15:46.467216 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.467240 kubelet[2719]: E1028 23:15:46.467232 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.467526 kubelet[2719]: E1028 23:15:46.467507 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.467526 kubelet[2719]: W1028 23:15:46.467521 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.467587 kubelet[2719]: E1028 23:15:46.467532 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.468491 kubelet[2719]: E1028 23:15:46.468291 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.468491 kubelet[2719]: W1028 23:15:46.468312 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.468491 kubelet[2719]: E1028 23:15:46.468324 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.468592 kubelet[2719]: E1028 23:15:46.468574 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.468592 kubelet[2719]: W1028 23:15:46.468585 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.468633 kubelet[2719]: E1028 23:15:46.468595 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.469652 kubelet[2719]: E1028 23:15:46.469622 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.469652 kubelet[2719]: W1028 23:15:46.469639 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.469652 kubelet[2719]: E1028 23:15:46.469652 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.471487 kubelet[2719]: E1028 23:15:46.470514 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.471487 kubelet[2719]: W1028 23:15:46.470534 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.471487 kubelet[2719]: E1028 23:15:46.470546 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.471487 kubelet[2719]: E1028 23:15:46.471323 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.471487 kubelet[2719]: W1028 23:15:46.471336 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.471487 kubelet[2719]: E1028 23:15:46.471348 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.471647 kubelet[2719]: E1028 23:15:46.471589 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.471647 kubelet[2719]: W1028 23:15:46.471598 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.471647 kubelet[2719]: E1028 23:15:46.471607 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.473284 kubelet[2719]: E1028 23:15:46.473246 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.473284 kubelet[2719]: W1028 23:15:46.473269 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.473284 kubelet[2719]: E1028 23:15:46.473282 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.473601 kubelet[2719]: E1028 23:15:46.473570 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.473601 kubelet[2719]: W1028 23:15:46.473588 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.473654 kubelet[2719]: E1028 23:15:46.473605 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.475318 kubelet[2719]: E1028 23:15:46.475282 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.475318 kubelet[2719]: W1028 23:15:46.475306 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.475318 kubelet[2719]: E1028 23:15:46.475319 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.476062 kubelet[2719]: E1028 23:15:46.476028 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.476062 kubelet[2719]: W1028 23:15:46.476049 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.476062 kubelet[2719]: E1028 23:15:46.476061 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.479817 kubelet[2719]: E1028 23:15:46.479792 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.479817 kubelet[2719]: W1028 23:15:46.479810 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.479904 kubelet[2719]: E1028 23:15:46.479825 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.480163 kubelet[2719]: E1028 23:15:46.480132 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.480163 kubelet[2719]: W1028 23:15:46.480146 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.480163 kubelet[2719]: E1028 23:15:46.480158 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.480392 kubelet[2719]: E1028 23:15:46.480355 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.480392 kubelet[2719]: W1028 23:15:46.480368 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.480392 kubelet[2719]: E1028 23:15:46.480387 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.480722 kubelet[2719]: E1028 23:15:46.480689 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.480722 kubelet[2719]: W1028 23:15:46.480706 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.480796 kubelet[2719]: E1028 23:15:46.480720 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.481049 kubelet[2719]: E1028 23:15:46.481022 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.481049 kubelet[2719]: W1028 23:15:46.481034 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.481049 kubelet[2719]: E1028 23:15:46.481046 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.481441 kubelet[2719]: E1028 23:15:46.481237 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.481441 kubelet[2719]: W1028 23:15:46.481255 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.481441 kubelet[2719]: E1028 23:15:46.481265 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.481535 kubelet[2719]: E1028 23:15:46.481498 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.481535 kubelet[2719]: W1028 23:15:46.481508 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.481535 kubelet[2719]: E1028 23:15:46.481517 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.481769 kubelet[2719]: E1028 23:15:46.481741 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.481769 kubelet[2719]: W1028 23:15:46.481753 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.483457 kubelet[2719]: E1028 23:15:46.481763 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.483457 kubelet[2719]: E1028 23:15:46.482636 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.483457 kubelet[2719]: W1028 23:15:46.482646 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.483457 kubelet[2719]: E1028 23:15:46.482658 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.484640 kubelet[2719]: E1028 23:15:46.484607 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.485481 kubelet[2719]: W1028 23:15:46.485446 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.485481 kubelet[2719]: E1028 23:15:46.485476 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.485830 kubelet[2719]: E1028 23:15:46.485810 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.485830 kubelet[2719]: W1028 23:15:46.485827 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.485830 kubelet[2719]: E1028 23:15:46.485840 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.486505 kubelet[2719]: E1028 23:15:46.486484 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.486505 kubelet[2719]: W1028 23:15:46.486500 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.486577 kubelet[2719]: E1028 23:15:46.486514 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:46.491621 systemd[1]: Started cri-containerd-7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2.scope - libcontainer container 7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2. Oct 28 23:15:46.502609 kubelet[2719]: E1028 23:15:46.502009 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:46.502609 kubelet[2719]: W1028 23:15:46.502062 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:46.502609 kubelet[2719]: E1028 23:15:46.502080 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:46.530369 containerd[1571]: time="2025-10-28T23:15:46.530333167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rcch,Uid:6c4f8101-142d-4300-b32f-619b426999f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\"" Oct 28 23:15:46.532118 kubelet[2719]: E1028 23:15:46.532098 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:47.646139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111627921.mount: Deactivated successfully. Oct 28 23:15:47.704962 kubelet[2719]: E1028 23:15:47.704904 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:15:48.049160 containerd[1571]: time="2025-10-28T23:15:48.049028170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:48.049702 containerd[1571]: time="2025-10-28T23:15:48.049655329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 28 23:15:48.050490 containerd[1571]: time="2025-10-28T23:15:48.050451966Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:48.052780 containerd[1571]: time="2025-10-28T23:15:48.052744399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:48.053369 containerd[1571]: time="2025-10-28T23:15:48.053322877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.63437236s" Oct 28 23:15:48.053400 containerd[1571]: time="2025-10-28T23:15:48.053357757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 28 23:15:48.054374 containerd[1571]: time="2025-10-28T23:15:48.054304674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 28 23:15:48.076883 containerd[1571]: time="2025-10-28T23:15:48.076845645Z" level=info msg="CreateContainer within sandbox \"0d1b365507538379f2a9867e1fa9360c2e1b1d66922226666b0cc9610d542bd7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 28 23:15:48.084462 containerd[1571]: time="2025-10-28T23:15:48.082942666Z" level=info msg="Container be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:48.089162 containerd[1571]: time="2025-10-28T23:15:48.089119367Z" level=info msg="CreateContainer within sandbox \"0d1b365507538379f2a9867e1fa9360c2e1b1d66922226666b0cc9610d542bd7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7\"" Oct 28 23:15:48.089667 containerd[1571]: time="2025-10-28T23:15:48.089621046Z" level=info msg="StartContainer for \"be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7\"" Oct 28 23:15:48.090726 containerd[1571]: time="2025-10-28T23:15:48.090691882Z" level=info 
msg="connecting to shim be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7" address="unix:///run/containerd/s/9f3a35bcf2898901fdd9c59155f67a6afdbe3e5b91b00af2e300bb5f95e174f9" protocol=ttrpc version=3 Oct 28 23:15:48.128604 systemd[1]: Started cri-containerd-be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7.scope - libcontainer container be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7. Oct 28 23:15:48.163599 containerd[1571]: time="2025-10-28T23:15:48.163498178Z" level=info msg="StartContainer for \"be6350b93ba668d1db9916a1ae852c35e9f24f15f0665ff8fff582e3798bd8d7\" returns successfully" Oct 28 23:15:48.816828 kubelet[2719]: E1028 23:15:48.816798 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:48.826907 kubelet[2719]: I1028 23:15:48.826829 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75656ccf98-zttf4" podStartSLOduration=2.19140762 podStartE2EDuration="3.826813657s" podCreationTimestamp="2025-10-28 23:15:45 +0000 UTC" firstStartedPulling="2025-10-28 23:15:46.418720758 +0000 UTC m=+24.797683945" lastFinishedPulling="2025-10-28 23:15:48.054126835 +0000 UTC m=+26.433089982" observedRunningTime="2025-10-28 23:15:48.826589738 +0000 UTC m=+27.205552925" watchObservedRunningTime="2025-10-28 23:15:48.826813657 +0000 UTC m=+27.205776844" Oct 28 23:15:48.870992 kubelet[2719]: E1028 23:15:48.870959 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.871195 kubelet[2719]: W1028 23:15:48.871120 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.871195 kubelet[2719]: E1028 
23:15:48.871146 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:48.871514 kubelet[2719]: E1028 23:15:48.871469 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.871604 kubelet[2719]: W1028 23:15:48.871482 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.871705 kubelet[2719]: E1028 23:15:48.871661 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:48.871925 kubelet[2719]: E1028 23:15:48.871914 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.871994 kubelet[2719]: W1028 23:15:48.871982 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.872057 kubelet[2719]: E1028 23:15:48.872038 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:48.872362 kubelet[2719]: E1028 23:15:48.872304 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.872362 kubelet[2719]: W1028 23:15:48.872316 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.872362 kubelet[2719]: E1028 23:15:48.872327 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:48.872726 kubelet[2719]: E1028 23:15:48.872675 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.872726 kubelet[2719]: W1028 23:15:48.872688 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.872726 kubelet[2719]: E1028 23:15:48.872700 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:48.872994 kubelet[2719]: E1028 23:15:48.872980 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.873059 kubelet[2719]: W1028 23:15:48.873047 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.873125 kubelet[2719]: E1028 23:15:48.873114 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:48.873369 kubelet[2719]: E1028 23:15:48.873331 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.873369 kubelet[2719]: W1028 23:15:48.873342 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.873588 kubelet[2719]: E1028 23:15:48.873351 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:48.873702 kubelet[2719]: E1028 23:15:48.873692 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.873778 kubelet[2719]: W1028 23:15:48.873767 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.873843 kubelet[2719]: E1028 23:15:48.873821 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:15:48.874066 kubelet[2719]: E1028 23:15:48.874056 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:15:48.874066 kubelet[2719]: W1028 23:15:48.874082 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:15:48.874066 kubelet[2719]: E1028 23:15:48.874094 2719 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:15:49.053802 containerd[1571]: time="2025-10-28T23:15:49.053739170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:49.057645 containerd[1571]: time="2025-10-28T23:15:49.057601798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 28 23:15:49.058486 containerd[1571]: time="2025-10-28T23:15:49.058459276Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:49.060504 containerd[1571]: time="2025-10-28T23:15:49.060263871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:49.061269 containerd[1571]: time="2025-10-28T23:15:49.061212308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.006882194s" Oct 28 23:15:49.061269 containerd[1571]: time="2025-10-28T23:15:49.061248628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 28 23:15:49.066670 containerd[1571]: time="2025-10-28T23:15:49.066634092Z" level=info msg="CreateContainer within sandbox \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 23:15:49.073501 containerd[1571]: time="2025-10-28T23:15:49.072626155Z" level=info msg="Container c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:49.080830 containerd[1571]: time="2025-10-28T23:15:49.080796372Z" level=info msg="CreateContainer within sandbox \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\"" Oct 28 23:15:49.081623 containerd[1571]: time="2025-10-28T23:15:49.081596249Z" level=info msg="StartContainer for \"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\"" Oct 28 23:15:49.082986 containerd[1571]: time="2025-10-28T23:15:49.082961405Z" level=info msg="connecting to shim c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40" address="unix:///run/containerd/s/7854fc5847537e1664c6833dee5420728a5957d7024619a3a5082fcd44120487" protocol=ttrpc version=3 Oct 28 23:15:49.108611 systemd[1]: Started cri-containerd-c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40.scope - libcontainer container c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40. Oct 28 23:15:49.155113 systemd[1]: cri-containerd-c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40.scope: Deactivated successfully. 
Oct 28 23:15:49.160774 containerd[1571]: time="2025-10-28T23:15:49.160713901Z" level=info msg="StartContainer for \"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\" returns successfully" Oct 28 23:15:49.182245 containerd[1571]: time="2025-10-28T23:15:49.182185519Z" level=info msg="received exit event container_id:\"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\" id:\"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\" pid:3428 exited_at:{seconds:1761693349 nanos:177670372}" Oct 28 23:15:49.182515 containerd[1571]: time="2025-10-28T23:15:49.182387158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\" id:\"c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40\" pid:3428 exited_at:{seconds:1761693349 nanos:177670372}" Oct 28 23:15:49.223071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c887c7419b1cf2b1ed12238a325f5401193a073b35f6d8d4aa66ed1dd037bf40-rootfs.mount: Deactivated successfully. 
Oct 28 23:15:49.704068 kubelet[2719]: E1028 23:15:49.704024 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:15:49.820632 kubelet[2719]: I1028 23:15:49.820604 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 23:15:49.822254 kubelet[2719]: E1028 23:15:49.820881 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:49.822254 kubelet[2719]: E1028 23:15:49.820970 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:49.822305 containerd[1571]: time="2025-10-28T23:15:49.821670715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 23:15:51.704222 kubelet[2719]: E1028 23:15:51.703982 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:15:52.572411 containerd[1571]: time="2025-10-28T23:15:52.572354920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:52.572999 containerd[1571]: time="2025-10-28T23:15:52.572970079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 28 23:15:52.573952 containerd[1571]: 
time="2025-10-28T23:15:52.573902557Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:52.576148 containerd[1571]: time="2025-10-28T23:15:52.575879032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:52.576564 containerd[1571]: time="2025-10-28T23:15:52.576540870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.754833117s" Oct 28 23:15:52.576655 containerd[1571]: time="2025-10-28T23:15:52.576638230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 28 23:15:52.580697 containerd[1571]: time="2025-10-28T23:15:52.580665701Z" level=info msg="CreateContainer within sandbox \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 23:15:52.589171 containerd[1571]: time="2025-10-28T23:15:52.588038763Z" level=info msg="Container c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:52.597251 containerd[1571]: time="2025-10-28T23:15:52.597196421Z" level=info msg="CreateContainer within sandbox \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\"" Oct 28 
23:15:52.597741 containerd[1571]: time="2025-10-28T23:15:52.597714820Z" level=info msg="StartContainer for \"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\"" Oct 28 23:15:52.599571 containerd[1571]: time="2025-10-28T23:15:52.599467016Z" level=info msg="connecting to shim c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7" address="unix:///run/containerd/s/7854fc5847537e1664c6833dee5420728a5957d7024619a3a5082fcd44120487" protocol=ttrpc version=3 Oct 28 23:15:52.631636 systemd[1]: Started cri-containerd-c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7.scope - libcontainer container c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7. Oct 28 23:15:52.674945 containerd[1571]: time="2025-10-28T23:15:52.674818077Z" level=info msg="StartContainer for \"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\" returns successfully" Oct 28 23:15:52.831375 kubelet[2719]: E1028 23:15:52.831255 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:53.148653 systemd[1]: cri-containerd-c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7.scope: Deactivated successfully. Oct 28 23:15:53.148916 systemd[1]: cri-containerd-c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7.scope: Consumed 443ms CPU time, 174.1M memory peak, 2.2M read from disk, 165.9M written to disk. 
Oct 28 23:15:53.150186 containerd[1571]: time="2025-10-28T23:15:53.150155889Z" level=info msg="received exit event container_id:\"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\" id:\"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\" pid:3491 exited_at:{seconds:1761693353 nanos:149925050}" Oct 28 23:15:53.150406 containerd[1571]: time="2025-10-28T23:15:53.150253529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\" id:\"c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7\" pid:3491 exited_at:{seconds:1761693353 nanos:149925050}" Oct 28 23:15:53.164104 kubelet[2719]: I1028 23:15:53.163953 2719 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 28 23:15:53.176674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2fef7a4c53e26ae984732015cfbaad6ed6233cfd3c643ff571e3aa763f687b7-rootfs.mount: Deactivated successfully. Oct 28 23:15:53.282868 systemd[1]: Created slice kubepods-burstable-podeddf9348_9a5f_4ac7_b557_fae5d1e3fcff.slice - libcontainer container kubepods-burstable-podeddf9348_9a5f_4ac7_b557_fae5d1e3fcff.slice. 
Oct 28 23:15:53.324165 kubelet[2719]: I1028 23:15:53.324071 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eddf9348-9a5f-4ac7-b557-fae5d1e3fcff-config-volume\") pod \"coredns-66bc5c9577-qtgwt\" (UID: \"eddf9348-9a5f-4ac7-b557-fae5d1e3fcff\") " pod="kube-system/coredns-66bc5c9577-qtgwt" Oct 28 23:15:53.324165 kubelet[2719]: I1028 23:15:53.324116 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4h9t\" (UniqueName: \"kubernetes.io/projected/eddf9348-9a5f-4ac7-b557-fae5d1e3fcff-kube-api-access-b4h9t\") pod \"coredns-66bc5c9577-qtgwt\" (UID: \"eddf9348-9a5f-4ac7-b557-fae5d1e3fcff\") " pod="kube-system/coredns-66bc5c9577-qtgwt" Oct 28 23:15:53.363054 systemd[1]: Created slice kubepods-besteffort-pod84a92fac_14cc_4b8a_a065_7ef0df05e34f.slice - libcontainer container kubepods-besteffort-pod84a92fac_14cc_4b8a_a065_7ef0df05e34f.slice. Oct 28 23:15:53.376137 systemd[1]: Created slice kubepods-besteffort-pod1a857c78_358a_4098_86ca_4d159e537b48.slice - libcontainer container kubepods-besteffort-pod1a857c78_358a_4098_86ca_4d159e537b48.slice. Oct 28 23:15:53.381962 systemd[1]: Created slice kubepods-besteffort-pode693546f_22c9_4f3e_b82e_1c2bd8d6de81.slice - libcontainer container kubepods-besteffort-pode693546f_22c9_4f3e_b82e_1c2bd8d6de81.slice. Oct 28 23:15:53.385719 systemd[1]: Created slice kubepods-besteffort-pod03d12470_929a_414d_b9fc_0eb2e9388b7a.slice - libcontainer container kubepods-besteffort-pod03d12470_929a_414d_b9fc_0eb2e9388b7a.slice. Oct 28 23:15:53.392517 systemd[1]: Created slice kubepods-besteffort-pod573d9c02_82cb_4bf8_9f40_79127dc42465.slice - libcontainer container kubepods-besteffort-pod573d9c02_82cb_4bf8_9f40_79127dc42465.slice. 
Oct 28 23:15:53.398675 systemd[1]: Created slice kubepods-burstable-pod5d6d39cf_8a2a_454d_9ef8_fd471b116726.slice - libcontainer container kubepods-burstable-pod5d6d39cf_8a2a_454d_9ef8_fd471b116726.slice. Oct 28 23:15:53.425358 kubelet[2719]: I1028 23:15:53.425233 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/573d9c02-82cb-4bf8-9f40-79127dc42465-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-bn5ff\" (UID: \"573d9c02-82cb-4bf8-9f40-79127dc42465\") " pod="calico-system/goldmane-7c778bb748-bn5ff" Oct 28 23:15:53.425358 kubelet[2719]: I1028 23:15:53.425281 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a857c78-358a-4098-86ca-4d159e537b48-whisker-backend-key-pair\") pod \"whisker-f4bf8d8b-mf67l\" (UID: \"1a857c78-358a-4098-86ca-4d159e537b48\") " pod="calico-system/whisker-f4bf8d8b-mf67l" Oct 28 23:15:53.425358 kubelet[2719]: I1028 23:15:53.425296 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j22ff\" (UniqueName: \"kubernetes.io/projected/5d6d39cf-8a2a-454d-9ef8-fd471b116726-kube-api-access-j22ff\") pod \"coredns-66bc5c9577-q9hnr\" (UID: \"5d6d39cf-8a2a-454d-9ef8-fd471b116726\") " pod="kube-system/coredns-66bc5c9577-q9hnr" Oct 28 23:15:53.425358 kubelet[2719]: I1028 23:15:53.425322 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw55p\" (UniqueName: \"kubernetes.io/projected/e693546f-22c9-4f3e-b82e-1c2bd8d6de81-kube-api-access-bw55p\") pod \"calico-apiserver-8dd56995c-k8qt2\" (UID: \"e693546f-22c9-4f3e-b82e-1c2bd8d6de81\") " pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" Oct 28 23:15:53.425358 kubelet[2719]: I1028 23:15:53.425341 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84a92fac-14cc-4b8a-a065-7ef0df05e34f-calico-apiserver-certs\") pod \"calico-apiserver-8dd56995c-pndrt\" (UID: \"84a92fac-14cc-4b8a-a065-7ef0df05e34f\") " pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" Oct 28 23:15:53.425573 kubelet[2719]: I1028 23:15:53.425359 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a857c78-358a-4098-86ca-4d159e537b48-whisker-ca-bundle\") pod \"whisker-f4bf8d8b-mf67l\" (UID: \"1a857c78-358a-4098-86ca-4d159e537b48\") " pod="calico-system/whisker-f4bf8d8b-mf67l" Oct 28 23:15:53.425573 kubelet[2719]: I1028 23:15:53.425374 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945rk\" (UniqueName: \"kubernetes.io/projected/573d9c02-82cb-4bf8-9f40-79127dc42465-kube-api-access-945rk\") pod \"goldmane-7c778bb748-bn5ff\" (UID: \"573d9c02-82cb-4bf8-9f40-79127dc42465\") " pod="calico-system/goldmane-7c778bb748-bn5ff" Oct 28 23:15:53.425573 kubelet[2719]: I1028 23:15:53.425388 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d6d39cf-8a2a-454d-9ef8-fd471b116726-config-volume\") pod \"coredns-66bc5c9577-q9hnr\" (UID: \"5d6d39cf-8a2a-454d-9ef8-fd471b116726\") " pod="kube-system/coredns-66bc5c9577-q9hnr" Oct 28 23:15:53.425573 kubelet[2719]: I1028 23:15:53.425448 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e693546f-22c9-4f3e-b82e-1c2bd8d6de81-calico-apiserver-certs\") pod \"calico-apiserver-8dd56995c-k8qt2\" (UID: \"e693546f-22c9-4f3e-b82e-1c2bd8d6de81\") " pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" Oct 28 
23:15:53.425573 kubelet[2719]: I1028 23:15:53.425466 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d12470-929a-414d-b9fc-0eb2e9388b7a-tigera-ca-bundle\") pod \"calico-kube-controllers-cf76cbd9f-lbvzw\" (UID: \"03d12470-929a-414d-b9fc-0eb2e9388b7a\") " pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" Oct 28 23:15:53.425679 kubelet[2719]: I1028 23:15:53.425483 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f86\" (UniqueName: \"kubernetes.io/projected/84a92fac-14cc-4b8a-a065-7ef0df05e34f-kube-api-access-p7f86\") pod \"calico-apiserver-8dd56995c-pndrt\" (UID: \"84a92fac-14cc-4b8a-a065-7ef0df05e34f\") " pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" Oct 28 23:15:53.425679 kubelet[2719]: I1028 23:15:53.425496 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/573d9c02-82cb-4bf8-9f40-79127dc42465-config\") pod \"goldmane-7c778bb748-bn5ff\" (UID: \"573d9c02-82cb-4bf8-9f40-79127dc42465\") " pod="calico-system/goldmane-7c778bb748-bn5ff" Oct 28 23:15:53.425679 kubelet[2719]: I1028 23:15:53.425510 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/573d9c02-82cb-4bf8-9f40-79127dc42465-goldmane-key-pair\") pod \"goldmane-7c778bb748-bn5ff\" (UID: \"573d9c02-82cb-4bf8-9f40-79127dc42465\") " pod="calico-system/goldmane-7c778bb748-bn5ff" Oct 28 23:15:53.425679 kubelet[2719]: I1028 23:15:53.425530 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shgk\" (UniqueName: \"kubernetes.io/projected/1a857c78-358a-4098-86ca-4d159e537b48-kube-api-access-8shgk\") pod \"whisker-f4bf8d8b-mf67l\" (UID: 
\"1a857c78-358a-4098-86ca-4d159e537b48\") " pod="calico-system/whisker-f4bf8d8b-mf67l" Oct 28 23:15:53.425679 kubelet[2719]: I1028 23:15:53.425546 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6gdb\" (UniqueName: \"kubernetes.io/projected/03d12470-929a-414d-b9fc-0eb2e9388b7a-kube-api-access-p6gdb\") pod \"calico-kube-controllers-cf76cbd9f-lbvzw\" (UID: \"03d12470-929a-414d-b9fc-0eb2e9388b7a\") " pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" Oct 28 23:15:53.589785 kubelet[2719]: E1028 23:15:53.589703 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:53.590691 containerd[1571]: time="2025-10-28T23:15:53.590383029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qtgwt,Uid:eddf9348-9a5f-4ac7-b557-fae5d1e3fcff,Namespace:kube-system,Attempt:0,}" Oct 28 23:15:53.667961 containerd[1571]: time="2025-10-28T23:15:53.667921616Z" level=error msg="Failed to destroy network for sandbox \"e47cdc3b22186f447882a84eecbc8c599509f18b56f07376e59b761813d53b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.669887 containerd[1571]: time="2025-10-28T23:15:53.669815332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qtgwt,Uid:eddf9348-9a5f-4ac7-b557-fae5d1e3fcff,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47cdc3b22186f447882a84eecbc8c599509f18b56f07376e59b761813d53b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.670141 
kubelet[2719]: E1028 23:15:53.670006 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47cdc3b22186f447882a84eecbc8c599509f18b56f07376e59b761813d53b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.670141 kubelet[2719]: E1028 23:15:53.670057 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47cdc3b22186f447882a84eecbc8c599509f18b56f07376e59b761813d53b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qtgwt" Oct 28 23:15:53.670141 kubelet[2719]: E1028 23:15:53.670075 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47cdc3b22186f447882a84eecbc8c599509f18b56f07376e59b761813d53b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qtgwt" Oct 28 23:15:53.670255 kubelet[2719]: E1028 23:15:53.670119 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qtgwt_kube-system(eddf9348-9a5f-4ac7-b557-fae5d1e3fcff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qtgwt_kube-system(eddf9348-9a5f-4ac7-b557-fae5d1e3fcff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e47cdc3b22186f447882a84eecbc8c599509f18b56f07376e59b761813d53b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qtgwt" podUID="eddf9348-9a5f-4ac7-b557-fae5d1e3fcff" Oct 28 23:15:53.670383 systemd[1]: run-netns-cni\x2d57cf922a\x2d22a2\x2d255d\x2d15e3\x2d0b89a63381ef.mount: Deactivated successfully. Oct 28 23:15:53.671293 containerd[1571]: time="2025-10-28T23:15:53.670545930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-pndrt,Uid:84a92fac-14cc-4b8a-a065-7ef0df05e34f,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:15:53.685007 containerd[1571]: time="2025-10-28T23:15:53.684745778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4bf8d8b-mf67l,Uid:1a857c78-358a-4098-86ca-4d159e537b48,Namespace:calico-system,Attempt:0,}" Oct 28 23:15:53.685762 containerd[1571]: time="2025-10-28T23:15:53.685728376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-k8qt2,Uid:e693546f-22c9-4f3e-b82e-1c2bd8d6de81,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:15:53.691966 containerd[1571]: time="2025-10-28T23:15:53.691920602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf76cbd9f-lbvzw,Uid:03d12470-929a-414d-b9fc-0eb2e9388b7a,Namespace:calico-system,Attempt:0,}" Oct 28 23:15:53.698374 containerd[1571]: time="2025-10-28T23:15:53.698338028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bn5ff,Uid:573d9c02-82cb-4bf8-9f40-79127dc42465,Namespace:calico-system,Attempt:0,}" Oct 28 23:15:53.704778 kubelet[2719]: E1028 23:15:53.704269 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:53.704919 containerd[1571]: time="2025-10-28T23:15:53.704712134Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-q9hnr,Uid:5d6d39cf-8a2a-454d-9ef8-fd471b116726,Namespace:kube-system,Attempt:0,}" Oct 28 23:15:53.711356 systemd[1]: Created slice kubepods-besteffort-pod6b1cc5e3_6c35_4356_8831_57857e48a65e.slice - libcontainer container kubepods-besteffort-pod6b1cc5e3_6c35_4356_8831_57857e48a65e.slice. Oct 28 23:15:53.719165 containerd[1571]: time="2025-10-28T23:15:53.719131942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4zdp,Uid:6b1cc5e3-6c35-4356-8831-57857e48a65e,Namespace:calico-system,Attempt:0,}" Oct 28 23:15:53.748682 containerd[1571]: time="2025-10-28T23:15:53.748636036Z" level=error msg="Failed to destroy network for sandbox \"f4ba7b3d08542c9945fa4a1802852d46f303887cb3ddd1846e8ab2a0afe4349a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.750690 containerd[1571]: time="2025-10-28T23:15:53.750555992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-pndrt,Uid:84a92fac-14cc-4b8a-a065-7ef0df05e34f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ba7b3d08542c9945fa4a1802852d46f303887cb3ddd1846e8ab2a0afe4349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.750910 kubelet[2719]: E1028 23:15:53.750795 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ba7b3d08542c9945fa4a1802852d46f303887cb3ddd1846e8ab2a0afe4349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 
23:15:53.750910 kubelet[2719]: E1028 23:15:53.750882 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ba7b3d08542c9945fa4a1802852d46f303887cb3ddd1846e8ab2a0afe4349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" Oct 28 23:15:53.750910 kubelet[2719]: E1028 23:15:53.750900 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4ba7b3d08542c9945fa4a1802852d46f303887cb3ddd1846e8ab2a0afe4349a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" Oct 28 23:15:53.751032 kubelet[2719]: E1028 23:15:53.750953 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8dd56995c-pndrt_calico-apiserver(84a92fac-14cc-4b8a-a065-7ef0df05e34f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8dd56995c-pndrt_calico-apiserver(84a92fac-14cc-4b8a-a065-7ef0df05e34f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4ba7b3d08542c9945fa4a1802852d46f303887cb3ddd1846e8ab2a0afe4349a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" podUID="84a92fac-14cc-4b8a-a065-7ef0df05e34f" Oct 28 23:15:53.778997 containerd[1571]: time="2025-10-28T23:15:53.778921888Z" level=error msg="Failed to destroy network for sandbox 
\"eb565cc277842d939ef95750fd2e2dcc0d1425d0f8d60ec8dd6f225f30b00ded\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.780077 containerd[1571]: time="2025-10-28T23:15:53.780031646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4bf8d8b-mf67l,Uid:1a857c78-358a-4098-86ca-4d159e537b48,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb565cc277842d939ef95750fd2e2dcc0d1425d0f8d60ec8dd6f225f30b00ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.780377 kubelet[2719]: E1028 23:15:53.780339 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb565cc277842d939ef95750fd2e2dcc0d1425d0f8d60ec8dd6f225f30b00ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.781458 kubelet[2719]: E1028 23:15:53.780499 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb565cc277842d939ef95750fd2e2dcc0d1425d0f8d60ec8dd6f225f30b00ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f4bf8d8b-mf67l" Oct 28 23:15:53.781458 kubelet[2719]: E1028 23:15:53.780525 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eb565cc277842d939ef95750fd2e2dcc0d1425d0f8d60ec8dd6f225f30b00ded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f4bf8d8b-mf67l" Oct 28 23:15:53.781458 kubelet[2719]: E1028 23:15:53.780584 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f4bf8d8b-mf67l_calico-system(1a857c78-358a-4098-86ca-4d159e537b48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f4bf8d8b-mf67l_calico-system(1a857c78-358a-4098-86ca-4d159e537b48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb565cc277842d939ef95750fd2e2dcc0d1425d0f8d60ec8dd6f225f30b00ded\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f4bf8d8b-mf67l" podUID="1a857c78-358a-4098-86ca-4d159e537b48" Oct 28 23:15:53.781640 containerd[1571]: time="2025-10-28T23:15:53.781587723Z" level=error msg="Failed to destroy network for sandbox \"17ea3e96cbab4c827e510574b80da1cf794e01d38e43b14416539de8f440b5ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.783657 containerd[1571]: time="2025-10-28T23:15:53.783565198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-k8qt2,Uid:e693546f-22c9-4f3e-b82e-1c2bd8d6de81,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17ea3e96cbab4c827e510574b80da1cf794e01d38e43b14416539de8f440b5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.783965 kubelet[2719]: E1028 23:15:53.783766 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17ea3e96cbab4c827e510574b80da1cf794e01d38e43b14416539de8f440b5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.783965 kubelet[2719]: E1028 23:15:53.783807 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17ea3e96cbab4c827e510574b80da1cf794e01d38e43b14416539de8f440b5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" Oct 28 23:15:53.783965 kubelet[2719]: E1028 23:15:53.783824 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17ea3e96cbab4c827e510574b80da1cf794e01d38e43b14416539de8f440b5ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" Oct 28 23:15:53.784108 kubelet[2719]: E1028 23:15:53.783864 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8dd56995c-k8qt2_calico-apiserver(e693546f-22c9-4f3e-b82e-1c2bd8d6de81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8dd56995c-k8qt2_calico-apiserver(e693546f-22c9-4f3e-b82e-1c2bd8d6de81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"17ea3e96cbab4c827e510574b80da1cf794e01d38e43b14416539de8f440b5ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" podUID="e693546f-22c9-4f3e-b82e-1c2bd8d6de81" Oct 28 23:15:53.799594 containerd[1571]: time="2025-10-28T23:15:53.799549523Z" level=error msg="Failed to destroy network for sandbox \"af6cfbda592f0d485828870b3375beeddc69c2ac2ed0373e7d012b211787becd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.800783 containerd[1571]: time="2025-10-28T23:15:53.800722200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf76cbd9f-lbvzw,Uid:03d12470-929a-414d-b9fc-0eb2e9388b7a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af6cfbda592f0d485828870b3375beeddc69c2ac2ed0373e7d012b211787becd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.801044 kubelet[2719]: E1028 23:15:53.801012 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af6cfbda592f0d485828870b3375beeddc69c2ac2ed0373e7d012b211787becd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.801100 kubelet[2719]: E1028 23:15:53.801061 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"af6cfbda592f0d485828870b3375beeddc69c2ac2ed0373e7d012b211787becd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" Oct 28 23:15:53.801100 kubelet[2719]: E1028 23:15:53.801079 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af6cfbda592f0d485828870b3375beeddc69c2ac2ed0373e7d012b211787becd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" Oct 28 23:15:53.801164 kubelet[2719]: E1028 23:15:53.801140 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cf76cbd9f-lbvzw_calico-system(03d12470-929a-414d-b9fc-0eb2e9388b7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cf76cbd9f-lbvzw_calico-system(03d12470-929a-414d-b9fc-0eb2e9388b7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af6cfbda592f0d485828870b3375beeddc69c2ac2ed0373e7d012b211787becd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" podUID="03d12470-929a-414d-b9fc-0eb2e9388b7a" Oct 28 23:15:53.804389 containerd[1571]: time="2025-10-28T23:15:53.804358312Z" level=error msg="Failed to destroy network for sandbox \"6c319203f32b31d103dc554ead3a0b96e14ae982144c6328222969fc9bef6724\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.804487 containerd[1571]: time="2025-10-28T23:15:53.804371752Z" level=error msg="Failed to destroy network for sandbox \"e74118f71e25e917b0530b8c4142381e16a81289bb2d0468642dee10c1f6ecd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.805449 containerd[1571]: time="2025-10-28T23:15:53.805396509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q9hnr,Uid:5d6d39cf-8a2a-454d-9ef8-fd471b116726,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74118f71e25e917b0530b8c4142381e16a81289bb2d0468642dee10c1f6ecd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.805633 kubelet[2719]: E1028 23:15:53.805602 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74118f71e25e917b0530b8c4142381e16a81289bb2d0468642dee10c1f6ecd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.805684 kubelet[2719]: E1028 23:15:53.805645 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74118f71e25e917b0530b8c4142381e16a81289bb2d0468642dee10c1f6ecd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q9hnr" Oct 28 23:15:53.805684 kubelet[2719]: E1028 
23:15:53.805663 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e74118f71e25e917b0530b8c4142381e16a81289bb2d0468642dee10c1f6ecd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-q9hnr" Oct 28 23:15:53.805739 kubelet[2719]: E1028 23:15:53.805708 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-q9hnr_kube-system(5d6d39cf-8a2a-454d-9ef8-fd471b116726)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-q9hnr_kube-system(5d6d39cf-8a2a-454d-9ef8-fd471b116726)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e74118f71e25e917b0530b8c4142381e16a81289bb2d0468642dee10c1f6ecd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-q9hnr" podUID="5d6d39cf-8a2a-454d-9ef8-fd471b116726" Oct 28 23:15:53.806191 containerd[1571]: time="2025-10-28T23:15:53.806150988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4zdp,Uid:6b1cc5e3-6c35-4356-8831-57857e48a65e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c319203f32b31d103dc554ead3a0b96e14ae982144c6328222969fc9bef6724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.807274 kubelet[2719]: E1028 23:15:53.807230 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6c319203f32b31d103dc554ead3a0b96e14ae982144c6328222969fc9bef6724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.807462 kubelet[2719]: E1028 23:15:53.807278 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c319203f32b31d103dc554ead3a0b96e14ae982144c6328222969fc9bef6724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:53.807462 kubelet[2719]: E1028 23:15:53.807298 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c319203f32b31d103dc554ead3a0b96e14ae982144c6328222969fc9bef6724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n4zdp" Oct 28 23:15:53.807462 kubelet[2719]: E1028 23:15:53.807348 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n4zdp_calico-system(6b1cc5e3-6c35-4356-8831-57857e48a65e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n4zdp_calico-system(6b1cc5e3-6c35-4356-8831-57857e48a65e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c319203f32b31d103dc554ead3a0b96e14ae982144c6328222969fc9bef6724\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n4zdp" 
podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:15:53.812505 containerd[1571]: time="2025-10-28T23:15:53.812474814Z" level=error msg="Failed to destroy network for sandbox \"4dbcc5e47e0d388a1f796536547666885f47dabab523ee2c13818d02cff5dcd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.818943 containerd[1571]: time="2025-10-28T23:15:53.818909359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bn5ff,Uid:573d9c02-82cb-4bf8-9f40-79127dc42465,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcc5e47e0d388a1f796536547666885f47dabab523ee2c13818d02cff5dcd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.819099 kubelet[2719]: E1028 23:15:53.819066 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcc5e47e0d388a1f796536547666885f47dabab523ee2c13818d02cff5dcd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:15:53.819145 kubelet[2719]: E1028 23:15:53.819129 2719 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcc5e47e0d388a1f796536547666885f47dabab523ee2c13818d02cff5dcd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bn5ff" Oct 28 23:15:53.819170 kubelet[2719]: E1028 
23:15:53.819145 2719 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dbcc5e47e0d388a1f796536547666885f47dabab523ee2c13818d02cff5dcd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bn5ff" Oct 28 23:15:53.819230 kubelet[2719]: E1028 23:15:53.819206 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-bn5ff_calico-system(573d9c02-82cb-4bf8-9f40-79127dc42465)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-bn5ff_calico-system(573d9c02-82cb-4bf8-9f40-79127dc42465)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dbcc5e47e0d388a1f796536547666885f47dabab523ee2c13818d02cff5dcd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-bn5ff" podUID="573d9c02-82cb-4bf8-9f40-79127dc42465" Oct 28 23:15:53.835560 kubelet[2719]: E1028 23:15:53.835532 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:15:53.837762 containerd[1571]: time="2025-10-28T23:15:53.837731717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 23:15:54.590010 systemd[1]: run-netns-cni\x2d2773c2b3\x2df355\x2da053\x2d97bc\x2d596f2351f968.mount: Deactivated successfully. Oct 28 23:15:54.590102 systemd[1]: run-netns-cni\x2d3b2fdb93\x2d7173\x2d0560\x2db4f1\x2dbd75b3d3fff3.mount: Deactivated successfully. 
Oct 28 23:15:54.590145 systemd[1]: run-netns-cni\x2d6aa439fc\x2d7c18\x2d6cf3\x2da31d\x2d6587a76ccf5e.mount: Deactivated successfully. Oct 28 23:15:57.728309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204063300.mount: Deactivated successfully. Oct 28 23:15:58.018035 containerd[1571]: time="2025-10-28T23:15:58.017920282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:58.018496 containerd[1571]: time="2025-10-28T23:15:58.018460842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 28 23:15:58.019518 containerd[1571]: time="2025-10-28T23:15:58.019482760Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:58.021256 containerd[1571]: time="2025-10-28T23:15:58.021226037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:15:58.022122 containerd[1571]: time="2025-10-28T23:15:58.021991676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.184221879s" Oct 28 23:15:58.022122 containerd[1571]: time="2025-10-28T23:15:58.022024956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 28 23:15:58.033679 containerd[1571]: time="2025-10-28T23:15:58.033196738Z" level=info 
msg="CreateContainer within sandbox \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 23:15:58.042257 containerd[1571]: time="2025-10-28T23:15:58.042207563Z" level=info msg="Container 94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:15:58.050289 containerd[1571]: time="2025-10-28T23:15:58.050233630Z" level=info msg="CreateContainer within sandbox \"7bd6cf348681a35c069a1a63d5782989616dade9ef33425d5e438c38facbf3f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf\"" Oct 28 23:15:58.050725 containerd[1571]: time="2025-10-28T23:15:58.050697230Z" level=info msg="StartContainer for \"94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf\"" Oct 28 23:15:58.052016 containerd[1571]: time="2025-10-28T23:15:58.051992747Z" level=info msg="connecting to shim 94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf" address="unix:///run/containerd/s/7854fc5847537e1664c6833dee5420728a5957d7024619a3a5082fcd44120487" protocol=ttrpc version=3 Oct 28 23:15:58.072579 systemd[1]: Started cri-containerd-94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf.scope - libcontainer container 94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf. Oct 28 23:15:58.108668 containerd[1571]: time="2025-10-28T23:15:58.108631776Z" level=info msg="StartContainer for \"94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf\" returns successfully" Oct 28 23:15:58.224965 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 28 23:15:58.225101 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 28 23:15:58.460865 kubelet[2719]: I1028 23:15:58.460823 2719 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a857c78-358a-4098-86ca-4d159e537b48-whisker-backend-key-pair\") pod \"1a857c78-358a-4098-86ca-4d159e537b48\" (UID: \"1a857c78-358a-4098-86ca-4d159e537b48\") "
Oct 28 23:15:58.462654 kubelet[2719]: I1028 23:15:58.460877 2719 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a857c78-358a-4098-86ca-4d159e537b48-whisker-ca-bundle\") pod \"1a857c78-358a-4098-86ca-4d159e537b48\" (UID: \"1a857c78-358a-4098-86ca-4d159e537b48\") "
Oct 28 23:15:58.462654 kubelet[2719]: I1028 23:15:58.460902 2719 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8shgk\" (UniqueName: \"kubernetes.io/projected/1a857c78-358a-4098-86ca-4d159e537b48-kube-api-access-8shgk\") pod \"1a857c78-358a-4098-86ca-4d159e537b48\" (UID: \"1a857c78-358a-4098-86ca-4d159e537b48\") "
Oct 28 23:15:58.478975 kubelet[2719]: I1028 23:15:58.478905 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a857c78-358a-4098-86ca-4d159e537b48-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1a857c78-358a-4098-86ca-4d159e537b48" (UID: "1a857c78-358a-4098-86ca-4d159e537b48"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 28 23:15:58.481252 kubelet[2719]: I1028 23:15:58.481210 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a857c78-358a-4098-86ca-4d159e537b48-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1a857c78-358a-4098-86ca-4d159e537b48" (UID: "1a857c78-358a-4098-86ca-4d159e537b48"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 28 23:15:58.481721 kubelet[2719]: I1028 23:15:58.481675 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a857c78-358a-4098-86ca-4d159e537b48-kube-api-access-8shgk" (OuterVolumeSpecName: "kube-api-access-8shgk") pod "1a857c78-358a-4098-86ca-4d159e537b48" (UID: "1a857c78-358a-4098-86ca-4d159e537b48"). InnerVolumeSpecName "kube-api-access-8shgk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 28 23:15:58.562099 kubelet[2719]: I1028 23:15:58.562056 2719 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a857c78-358a-4098-86ca-4d159e537b48-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Oct 28 23:15:58.562099 kubelet[2719]: I1028 23:15:58.562093 2719 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a857c78-358a-4098-86ca-4d159e537b48-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Oct 28 23:15:58.562099 kubelet[2719]: I1028 23:15:58.562102 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8shgk\" (UniqueName: \"kubernetes.io/projected/1a857c78-358a-4098-86ca-4d159e537b48-kube-api-access-8shgk\") on node \"localhost\" DevicePath \"\""
Oct 28 23:15:58.728519 systemd[1]: var-lib-kubelet-pods-1a857c78\x2d358a\x2d4098\x2d86ca\x2d4d159e537b48-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8shgk.mount: Deactivated successfully.
Oct 28 23:15:58.728773 systemd[1]: var-lib-kubelet-pods-1a857c78\x2d358a\x2d4098\x2d86ca\x2d4d159e537b48-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Oct 28 23:15:58.842454 kubelet[2719]: I1028 23:15:58.842328 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 28 23:15:58.842777 kubelet[2719]: E1028 23:15:58.842724 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:15:58.875645 kubelet[2719]: E1028 23:15:58.875601 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:15:58.875903 kubelet[2719]: E1028 23:15:58.875879 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:15:58.882114 systemd[1]: Removed slice kubepods-besteffort-pod1a857c78_358a_4098_86ca_4d159e537b48.slice - libcontainer container kubepods-besteffort-pod1a857c78_358a_4098_86ca_4d159e537b48.slice.
Oct 28 23:15:58.897103 kubelet[2719]: I1028 23:15:58.896935 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2rcch" podStartSLOduration=1.40502851 podStartE2EDuration="12.894055909s" podCreationTimestamp="2025-10-28 23:15:46 +0000 UTC" firstStartedPulling="2025-10-28 23:15:46.533705116 +0000 UTC m=+24.912668303" lastFinishedPulling="2025-10-28 23:15:58.022732555 +0000 UTC m=+36.401695702" observedRunningTime="2025-10-28 23:15:58.89347663 +0000 UTC m=+37.272439817" watchObservedRunningTime="2025-10-28 23:15:58.894055909 +0000 UTC m=+37.273019096"
Oct 28 23:15:58.939844 systemd[1]: Created slice kubepods-besteffort-pod029c1d7b_85d2_40f9_a251_8e93ec6d00e8.slice - libcontainer container kubepods-besteffort-pod029c1d7b_85d2_40f9_a251_8e93ec6d00e8.slice.
Oct 28 23:15:58.964517 kubelet[2719]: I1028 23:15:58.964482 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/029c1d7b-85d2-40f9-a251-8e93ec6d00e8-whisker-backend-key-pair\") pod \"whisker-7c96dcf9f6-f22wj\" (UID: \"029c1d7b-85d2-40f9-a251-8e93ec6d00e8\") " pod="calico-system/whisker-7c96dcf9f6-f22wj"
Oct 28 23:15:58.964517 kubelet[2719]: I1028 23:15:58.964526 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmlm2\" (UniqueName: \"kubernetes.io/projected/029c1d7b-85d2-40f9-a251-8e93ec6d00e8-kube-api-access-fmlm2\") pod \"whisker-7c96dcf9f6-f22wj\" (UID: \"029c1d7b-85d2-40f9-a251-8e93ec6d00e8\") " pod="calico-system/whisker-7c96dcf9f6-f22wj"
Oct 28 23:15:58.964667 kubelet[2719]: I1028 23:15:58.964570 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029c1d7b-85d2-40f9-a251-8e93ec6d00e8-whisker-ca-bundle\") pod \"whisker-7c96dcf9f6-f22wj\" (UID: \"029c1d7b-85d2-40f9-a251-8e93ec6d00e8\") " pod="calico-system/whisker-7c96dcf9f6-f22wj"
Oct 28 23:15:59.256844 containerd[1571]: time="2025-10-28T23:15:59.256803109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c96dcf9f6-f22wj,Uid:029c1d7b-85d2-40f9-a251-8e93ec6d00e8,Namespace:calico-system,Attempt:0,}"
Oct 28 23:15:59.403292 systemd-networkd[1485]: cali4e80848f8a0: Link UP
Oct 28 23:15:59.404227 systemd-networkd[1485]: cali4e80848f8a0: Gained carrier
Oct 28 23:15:59.416010 containerd[1571]: 2025-10-28 23:15:59.280 [INFO][3868] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 28 23:15:59.416010 containerd[1571]: 2025-10-28 23:15:59.310 [INFO][3868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0 whisker-7c96dcf9f6- calico-system 029c1d7b-85d2-40f9-a251-8e93ec6d00e8 950 0 2025-10-28 23:15:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c96dcf9f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7c96dcf9f6-f22wj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4e80848f8a0 [] [] }} ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-"
Oct 28 23:15:59.416010 containerd[1571]: 2025-10-28 23:15:59.310 [INFO][3868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.416010 containerd[1571]: 2025-10-28 23:15:59.362 [INFO][3883] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" HandleID="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Workload="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.362 [INFO][3883] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" HandleID="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Workload="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7c96dcf9f6-f22wj", "timestamp":"2025-10-28 23:15:59.36203331 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.362 [INFO][3883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.362 [INFO][3883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.362 [INFO][3883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.373 [INFO][3883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" host="localhost"
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.377 [INFO][3883] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.381 [INFO][3883] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.383 [INFO][3883] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.385 [INFO][3883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 28 23:15:59.416194 containerd[1571]: 2025-10-28 23:15:59.385 [INFO][3883] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" host="localhost"
Oct 28 23:15:59.416413 containerd[1571]: 2025-10-28 23:15:59.386 [INFO][3883] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09
Oct 28 23:15:59.416413 containerd[1571]: 2025-10-28 23:15:59.389 [INFO][3883] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" host="localhost"
Oct 28 23:15:59.416413 containerd[1571]: 2025-10-28 23:15:59.394 [INFO][3883] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" host="localhost"
Oct 28 23:15:59.416413 containerd[1571]: 2025-10-28 23:15:59.394 [INFO][3883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" host="localhost"
Oct 28 23:15:59.416413 containerd[1571]: 2025-10-28 23:15:59.394 [INFO][3883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 28 23:15:59.416413 containerd[1571]: 2025-10-28 23:15:59.394 [INFO][3883] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" HandleID="k8s-pod-network.284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Workload="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.416602 containerd[1571]: 2025-10-28 23:15:59.396 [INFO][3868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0", GenerateName:"whisker-7c96dcf9f6-", Namespace:"calico-system", SelfLink:"", UID:"029c1d7b-85d2-40f9-a251-8e93ec6d00e8", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c96dcf9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7c96dcf9f6-f22wj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4e80848f8a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 28 23:15:59.416602 containerd[1571]: 2025-10-28 23:15:59.397 [INFO][3868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.416673 containerd[1571]: 2025-10-28 23:15:59.397 [INFO][3868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e80848f8a0 ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.416673 containerd[1571]: 2025-10-28 23:15:59.404 [INFO][3868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.416713 containerd[1571]: 2025-10-28 23:15:59.404 [INFO][3868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0", GenerateName:"whisker-7c96dcf9f6-", Namespace:"calico-system", SelfLink:"", UID:"029c1d7b-85d2-40f9-a251-8e93ec6d00e8", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c96dcf9f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09", Pod:"whisker-7c96dcf9f6-f22wj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4e80848f8a0", MAC:"56:8b:56:a0:9c:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 28 23:15:59.416758 containerd[1571]: 2025-10-28 23:15:59.412 [INFO][3868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" Namespace="calico-system" Pod="whisker-7c96dcf9f6-f22wj" WorkloadEndpoint="localhost-k8s-whisker--7c96dcf9f6--f22wj-eth0"
Oct 28 23:15:59.464624 containerd[1571]: time="2025-10-28T23:15:59.464553715Z" level=info msg="connecting to shim 284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09" address="unix:///run/containerd/s/9957f5c4dff0515aac9ac1588bf9177a235553c669b01ca6848996bbd6bbf7b9" namespace=k8s.io protocol=ttrpc version=3
Oct 28 23:15:59.494595 systemd[1]: Started cri-containerd-284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09.scope - libcontainer container 284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09.
Oct 28 23:15:59.507304 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 28 23:15:59.595149 containerd[1571]: time="2025-10-28T23:15:59.595099758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c96dcf9f6-f22wj,Uid:029c1d7b-85d2-40f9-a251-8e93ec6d00e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"284251bf4a46e40030b0f9f5098cbd82e53e9a9c69e7c1b9b280ba5e99647d09\""
Oct 28 23:15:59.598448 containerd[1571]: time="2025-10-28T23:15:59.597059155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 28 23:15:59.708384 kubelet[2719]: I1028 23:15:59.708333 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a857c78-358a-4098-86ca-4d159e537b48" path="/var/lib/kubelet/pods/1a857c78-358a-4098-86ca-4d159e537b48/volumes"
Oct 28 23:15:59.805105 containerd[1571]: time="2025-10-28T23:15:59.804975240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:15:59.806906 containerd[1571]: time="2025-10-28T23:15:59.806802277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 28 23:15:59.806906 containerd[1571]: time="2025-10-28T23:15:59.806860717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 28 23:15:59.807101 kubelet[2719]: E1028 23:15:59.807025 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 28 23:15:59.808993 kubelet[2719]: E1028 23:15:59.808944 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 28 23:15:59.810953 kubelet[2719]: E1028 23:15:59.810921 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c96dcf9f6-f22wj_calico-system(029c1d7b-85d2-40f9-a251-8e93ec6d00e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:15:59.812106 containerd[1571]: time="2025-10-28T23:15:59.812003789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 28 23:15:59.883439 kubelet[2719]: I1028 23:15:59.881518 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 28 23:15:59.883932 kubelet[2719]: E1028 23:15:59.883901 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:15:59.919802 systemd-networkd[1485]: vxlan.calico: Link UP
Oct 28 23:15:59.919807 systemd-networkd[1485]: vxlan.calico: Gained carrier
Oct 28 23:16:00.027961 containerd[1571]: time="2025-10-28T23:16:00.027906985Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:00.028931 containerd[1571]: time="2025-10-28T23:16:00.028878904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 28 23:16:00.028975 containerd[1571]: time="2025-10-28T23:16:00.028944064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 28 23:16:00.029181 kubelet[2719]: E1028 23:16:00.029100 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 28 23:16:00.029181 kubelet[2719]: E1028 23:16:00.029175 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 28 23:16:00.029296 kubelet[2719]: E1028 23:16:00.029264 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c96dcf9f6-f22wj_calico-system(029c1d7b-85d2-40f9-a251-8e93ec6d00e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:00.030547 kubelet[2719]: E1028 23:16:00.030489 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c96dcf9f6-f22wj" podUID="029c1d7b-85d2-40f9-a251-8e93ec6d00e8"
Oct 28 23:16:00.290537 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794).
Oct 28 23:16:00.346437 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg
Oct 28 23:16:00.347908 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:16:00.352140 systemd-logind[1547]: New session 9 of user core.
Oct 28 23:16:00.360597 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 28 23:16:00.497589 sshd[4158]: Connection closed by 10.0.0.1 port 53794
Oct 28 23:16:00.497891 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Oct 28 23:16:00.501396 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:53794.service: Deactivated successfully.
Oct 28 23:16:00.503043 systemd[1]: session-9.scope: Deactivated successfully.
Oct 28 23:16:00.505014 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit.
Oct 28 23:16:00.506313 systemd-logind[1547]: Removed session 9.
Oct 28 23:16:00.748655 systemd-networkd[1485]: cali4e80848f8a0: Gained IPv6LL
Oct 28 23:16:00.885206 kubelet[2719]: E1028 23:16:00.885133 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c96dcf9f6-f22wj" podUID="029c1d7b-85d2-40f9-a251-8e93ec6d00e8"
Oct 28 23:16:01.324644 systemd-networkd[1485]: vxlan.calico: Gained IPv6LL
Oct 28 23:16:04.709004 containerd[1571]: time="2025-10-28T23:16:04.708954605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-pndrt,Uid:84a92fac-14cc-4b8a-a065-7ef0df05e34f,Namespace:calico-apiserver,Attempt:0,}"
Oct 28 23:16:04.804913 systemd-networkd[1485]: cali666ed96b12c: Link UP
Oct 28 23:16:04.805104 systemd-networkd[1485]: cali666ed96b12c: Gained carrier
Oct 28 23:16:04.817802 containerd[1571]: 2025-10-28 23:16:04.744 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0 calico-apiserver-8dd56995c- calico-apiserver 84a92fac-14cc-4b8a-a065-7ef0df05e34f 875 0 2025-10-28 23:15:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8dd56995c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8dd56995c-pndrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali666ed96b12c [] [] }} ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-"
Oct 28 23:16:04.817802 containerd[1571]: 2025-10-28 23:16:04.744 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0"
Oct 28 23:16:04.817802 containerd[1571]: 2025-10-28 23:16:04.766 [INFO][4193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" HandleID="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Workload="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0"
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.766 [INFO][4193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" HandleID="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Workload="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8dd56995c-pndrt", "timestamp":"2025-10-28 23:16:04.766129423 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.766 [INFO][4193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.766 [INFO][4193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.766 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.776 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" host="localhost"
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.780 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.785 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.786 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.788 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 28 23:16:04.817999 containerd[1571]: 2025-10-28 23:16:04.789 [INFO][4193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" host="localhost"
Oct 28 23:16:04.818196 containerd[1571]: 2025-10-28 23:16:04.790 [INFO][4193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0
Oct 28 23:16:04.818196 containerd[1571]: 2025-10-28 23:16:04.793 [INFO][4193] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" host="localhost"
Oct 28 23:16:04.818196 containerd[1571]: 2025-10-28 23:16:04.798 [INFO][4193] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" host="localhost"
Oct 28 23:16:04.818196 containerd[1571]: 2025-10-28 23:16:04.798 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" host="localhost"
Oct 28 23:16:04.818196 containerd[1571]: 2025-10-28 23:16:04.798 [INFO][4193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 28 23:16:04.818196 containerd[1571]: 2025-10-28 23:16:04.798 [INFO][4193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" HandleID="k8s-pod-network.a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Workload="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0"
Oct 28 23:16:04.818363 containerd[1571]: 2025-10-28 23:16:04.800 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0", GenerateName:"calico-apiserver-8dd56995c-", Namespace:"calico-apiserver", SelfLink:"", UID:"84a92fac-14cc-4b8a-a065-7ef0df05e34f",
ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8dd56995c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8dd56995c-pndrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali666ed96b12c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:04.818414 containerd[1571]: 2025-10-28 23:16:04.800 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" Oct 28 23:16:04.818414 containerd[1571]: 2025-10-28 23:16:04.800 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali666ed96b12c ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" Oct 28 23:16:04.818414 containerd[1571]: 2025-10-28 23:16:04.804 [INFO][4178] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" Oct 28 23:16:04.819470 containerd[1571]: 2025-10-28 23:16:04.804 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0", GenerateName:"calico-apiserver-8dd56995c-", Namespace:"calico-apiserver", SelfLink:"", UID:"84a92fac-14cc-4b8a-a065-7ef0df05e34f", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8dd56995c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0", Pod:"calico-apiserver-8dd56995c-pndrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali666ed96b12c", MAC:"7a:4d:45:c4:05:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:04.819757 containerd[1571]: 2025-10-28 23:16:04.815 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-pndrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--pndrt-eth0" Oct 28 23:16:04.836363 containerd[1571]: time="2025-10-28T23:16:04.836311106Z" level=info msg="connecting to shim a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0" address="unix:///run/containerd/s/680592c52f5e0f0942df462f5c6134de669baa3a2a0e7a37489b6a6f455563c1" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:04.868596 systemd[1]: Started cri-containerd-a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0.scope - libcontainer container a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0. 
Oct 28 23:16:04.890967 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:04.910350 containerd[1571]: time="2025-10-28T23:16:04.910296985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-pndrt,Uid:84a92fac-14cc-4b8a-a065-7ef0df05e34f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a360290d48d6ebb30a64da659863635b735df9c4cfcba75857f95195dfee78c0\"" Oct 28 23:16:04.913150 containerd[1571]: time="2025-10-28T23:16:04.913119822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:16:05.122360 containerd[1571]: time="2025-10-28T23:16:05.122291801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:05.123315 containerd[1571]: time="2025-10-28T23:16:05.123276840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:16:05.123394 containerd[1571]: time="2025-10-28T23:16:05.123362000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:16:05.123750 kubelet[2719]: E1028 23:16:05.123550 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:16:05.123750 kubelet[2719]: E1028 23:16:05.123599 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:16:05.123750 kubelet[2719]: E1028 23:16:05.123678 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8dd56995c-pndrt_calico-apiserver(84a92fac-14cc-4b8a-a065-7ef0df05e34f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:05.123750 kubelet[2719]: E1028 23:16:05.123709 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" podUID="84a92fac-14cc-4b8a-a065-7ef0df05e34f" Oct 28 23:16:05.513410 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:53808.service - OpenSSH per-connection server daemon (10.0.0.1:53808). Oct 28 23:16:05.563264 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 53808 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:05.564713 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:05.569103 systemd-logind[1547]: New session 10 of user core. Oct 28 23:16:05.575575 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 28 23:16:05.671376 sshd[4270]: Connection closed by 10.0.0.1 port 53808 Oct 28 23:16:05.671203 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:05.675135 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:53808.service: Deactivated successfully. Oct 28 23:16:05.677038 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 23:16:05.677858 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. Oct 28 23:16:05.678830 systemd-logind[1547]: Removed session 10. Oct 28 23:16:05.706274 containerd[1571]: time="2025-10-28T23:16:05.706186721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bn5ff,Uid:573d9c02-82cb-4bf8-9f40-79127dc42465,Namespace:calico-system,Attempt:0,}" Oct 28 23:16:05.707328 kubelet[2719]: E1028 23:16:05.707294 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:05.707739 containerd[1571]: time="2025-10-28T23:16:05.707683400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q9hnr,Uid:5d6d39cf-8a2a-454d-9ef8-fd471b116726,Namespace:kube-system,Attempt:0,}" Oct 28 23:16:05.709272 containerd[1571]: time="2025-10-28T23:16:05.709126238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf76cbd9f-lbvzw,Uid:03d12470-929a-414d-b9fc-0eb2e9388b7a,Namespace:calico-system,Attempt:0,}" Oct 28 23:16:05.826069 systemd-networkd[1485]: calid29f185517c: Link UP Oct 28 23:16:05.826302 systemd-networkd[1485]: calid29f185517c: Gained carrier Oct 28 23:16:05.834388 containerd[1571]: 2025-10-28 23:16:05.756 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--bn5ff-eth0 goldmane-7c778bb748- calico-system 573d9c02-82cb-4bf8-9f40-79127dc42465 878 0 2025-10-28 23:15:42 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-bn5ff eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid29f185517c [] [] }} ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-" Oct 28 23:16:05.834388 containerd[1571]: 2025-10-28 23:16:05.756 [INFO][4286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.834388 containerd[1571]: 2025-10-28 23:16:05.786 [INFO][4332] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" HandleID="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Workload="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.786 [INFO][4332] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" HandleID="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Workload="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-bn5ff", "timestamp":"2025-10-28 23:16:05.786145919 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.786 [INFO][4332] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.786 [INFO][4332] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.786 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.796 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" host="localhost" Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.800 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.803 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.806 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.808 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:05.834755 containerd[1571]: 2025-10-28 23:16:05.808 [INFO][4332] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" host="localhost" Oct 28 23:16:05.835300 containerd[1571]: 2025-10-28 23:16:05.809 [INFO][4332] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279 Oct 28 23:16:05.835300 containerd[1571]: 2025-10-28 23:16:05.813 [INFO][4332] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" host="localhost" Oct 28 23:16:05.835300 containerd[1571]: 2025-10-28 23:16:05.818 [INFO][4332] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" host="localhost" Oct 28 23:16:05.835300 containerd[1571]: 2025-10-28 23:16:05.818 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" host="localhost" Oct 28 23:16:05.835300 containerd[1571]: 2025-10-28 23:16:05.818 [INFO][4332] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:16:05.835300 containerd[1571]: 2025-10-28 23:16:05.818 [INFO][4332] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" HandleID="k8s-pod-network.447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Workload="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.835415 containerd[1571]: 2025-10-28 23:16:05.820 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--bn5ff-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"573d9c02-82cb-4bf8-9f40-79127dc42465", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 42, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-bn5ff", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid29f185517c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:05.835415 containerd[1571]: 2025-10-28 23:16:05.820 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.835519 containerd[1571]: 2025-10-28 23:16:05.820 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid29f185517c ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.835519 containerd[1571]: 2025-10-28 23:16:05.822 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.835561 containerd[1571]: 2025-10-28 23:16:05.822 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--bn5ff-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"573d9c02-82cb-4bf8-9f40-79127dc42465", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279", Pod:"goldmane-7c778bb748-bn5ff", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid29f185517c", MAC:"fe:dd:d1:42:7e:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:05.835608 containerd[1571]: 2025-10-28 23:16:05.831 [INFO][4286] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" Namespace="calico-system" Pod="goldmane-7c778bb748-bn5ff" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bn5ff-eth0" Oct 28 23:16:05.859548 containerd[1571]: time="2025-10-28T23:16:05.859509804Z" level=info msg="connecting to shim 447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279" address="unix:///run/containerd/s/40c30be915119cc936edc773f3248e35c7208e7bfe764d950dc90e7d61882288" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:05.886678 systemd[1]: Started cri-containerd-447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279.scope - libcontainer container 447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279. Oct 28 23:16:05.901993 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:05.902776 kubelet[2719]: E1028 23:16:05.902744 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" podUID="84a92fac-14cc-4b8a-a065-7ef0df05e34f" Oct 28 23:16:05.942500 containerd[1571]: time="2025-10-28T23:16:05.942461439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bn5ff,Uid:573d9c02-82cb-4bf8-9f40-79127dc42465,Namespace:calico-system,Attempt:0,} returns sandbox id \"447bde291c31ac49f181c0dd7031870025214eb580c07c0446235c12061b9279\"" Oct 28 23:16:05.944988 containerd[1571]: time="2025-10-28T23:16:05.944936716Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 23:16:05.957227 systemd-networkd[1485]: cali1ba4ef6d110: Link UP Oct 28 23:16:05.958342 systemd-networkd[1485]: cali1ba4ef6d110: Gained carrier Oct 28 23:16:05.970666 containerd[1571]: 2025-10-28 23:16:05.757 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0 calico-kube-controllers-cf76cbd9f- calico-system 03d12470-929a-414d-b9fc-0eb2e9388b7a 877 0 2025-10-28 23:15:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cf76cbd9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cf76cbd9f-lbvzw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1ba4ef6d110 [] [] }} ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-" Oct 28 23:16:05.970666 containerd[1571]: 2025-10-28 23:16:05.757 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.970666 containerd[1571]: 2025-10-28 23:16:05.789 [INFO][4330] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" HandleID="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" 
Workload="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.790 [INFO][4330] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" HandleID="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Workload="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cf76cbd9f-lbvzw", "timestamp":"2025-10-28 23:16:05.789615476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.790 [INFO][4330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.818 [INFO][4330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.818 [INFO][4330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.897 [INFO][4330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" host="localhost" Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.904 [INFO][4330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.910 [INFO][4330] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.912 [INFO][4330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.922 [INFO][4330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:05.970845 containerd[1571]: 2025-10-28 23:16:05.922 [INFO][4330] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" host="localhost" Oct 28 23:16:05.971039 containerd[1571]: 2025-10-28 23:16:05.924 [INFO][4330] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b Oct 28 23:16:05.971039 containerd[1571]: 2025-10-28 23:16:05.935 [INFO][4330] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" host="localhost" Oct 28 23:16:05.971039 containerd[1571]: 2025-10-28 23:16:05.946 [INFO][4330] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" host="localhost" Oct 28 23:16:05.971039 containerd[1571]: 2025-10-28 23:16:05.946 [INFO][4330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" host="localhost" Oct 28 23:16:05.971039 containerd[1571]: 2025-10-28 23:16:05.947 [INFO][4330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:16:05.971039 containerd[1571]: 2025-10-28 23:16:05.947 [INFO][4330] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" HandleID="k8s-pod-network.8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Workload="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.971150 containerd[1571]: 2025-10-28 23:16:05.953 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0", GenerateName:"calico-kube-controllers-cf76cbd9f-", Namespace:"calico-system", SelfLink:"", UID:"03d12470-929a-414d-b9fc-0eb2e9388b7a", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf76cbd9f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cf76cbd9f-lbvzw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ba4ef6d110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:05.971197 containerd[1571]: 2025-10-28 23:16:05.953 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.971197 containerd[1571]: 2025-10-28 23:16:05.953 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ba4ef6d110 ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.971197 containerd[1571]: 2025-10-28 23:16:05.958 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.971317 containerd[1571]: 2025-10-28 
23:16:05.959 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0", GenerateName:"calico-kube-controllers-cf76cbd9f-", Namespace:"calico-system", SelfLink:"", UID:"03d12470-929a-414d-b9fc-0eb2e9388b7a", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cf76cbd9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b", Pod:"calico-kube-controllers-cf76cbd9f-lbvzw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ba4ef6d110", MAC:"a6:0e:69:37:3d:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:05.971370 containerd[1571]: 2025-10-28 
23:16:05.968 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" Namespace="calico-system" Pod="calico-kube-controllers-cf76cbd9f-lbvzw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cf76cbd9f--lbvzw-eth0" Oct 28 23:16:05.991297 containerd[1571]: time="2025-10-28T23:16:05.990239630Z" level=info msg="connecting to shim 8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b" address="unix:///run/containerd/s/6ca707db971efd175c058e99855bddce0168a1b8bd17beeab8d0bcc4444c1e64" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:06.016697 systemd[1]: Started cri-containerd-8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b.scope - libcontainer container 8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b. Oct 28 23:16:06.037699 systemd-networkd[1485]: cali952d72727a0: Link UP Oct 28 23:16:06.038523 systemd-networkd[1485]: cali952d72727a0: Gained carrier Oct 28 23:16:06.039413 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:06.052975 containerd[1571]: 2025-10-28 23:16:05.758 [INFO][4284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--q9hnr-eth0 coredns-66bc5c9577- kube-system 5d6d39cf-8a2a-454d-9ef8-fd471b116726 880 0 2025-10-28 23:15:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-q9hnr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali952d72727a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" 
Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-" Oct 28 23:16:06.052975 containerd[1571]: 2025-10-28 23:16:05.758 [INFO][4284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 23:16:06.052975 containerd[1571]: 2025-10-28 23:16:05.792 [INFO][4338] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" HandleID="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Workload="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:05.792 [INFO][4338] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" HandleID="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Workload="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000495050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-q9hnr", "timestamp":"2025-10-28 23:16:05.792688392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:05.792 [INFO][4338] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:05.947 [INFO][4338] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:05.947 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:05.998 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" host="localhost" Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:06.004 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:06.011 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:06.014 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:06.016 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:06.053225 containerd[1571]: 2025-10-28 23:16:06.016 [INFO][4338] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" host="localhost" Oct 28 23:16:06.053847 containerd[1571]: 2025-10-28 23:16:06.018 [INFO][4338] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06 Oct 28 23:16:06.053847 containerd[1571]: 2025-10-28 23:16:06.022 [INFO][4338] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" host="localhost" Oct 28 23:16:06.053847 containerd[1571]: 2025-10-28 23:16:06.031 [INFO][4338] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" host="localhost" Oct 28 23:16:06.053847 containerd[1571]: 2025-10-28 23:16:06.031 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" host="localhost" Oct 28 23:16:06.053847 containerd[1571]: 2025-10-28 23:16:06.031 [INFO][4338] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:16:06.053847 containerd[1571]: 2025-10-28 23:16:06.031 [INFO][4338] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" HandleID="k8s-pod-network.8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Workload="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 23:16:06.054004 containerd[1571]: 2025-10-28 23:16:06.034 [INFO][4284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--q9hnr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5d6d39cf-8a2a-454d-9ef8-fd471b116726", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-q9hnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali952d72727a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:06.054004 containerd[1571]: 2025-10-28 23:16:06.034 [INFO][4284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 23:16:06.054004 containerd[1571]: 2025-10-28 23:16:06.034 [INFO][4284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali952d72727a0 ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 
23:16:06.054004 containerd[1571]: 2025-10-28 23:16:06.039 [INFO][4284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 23:16:06.054004 containerd[1571]: 2025-10-28 23:16:06.039 [INFO][4284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--q9hnr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5d6d39cf-8a2a-454d-9ef8-fd471b116726", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06", Pod:"coredns-66bc5c9577-q9hnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali952d72727a0", 
MAC:"b2:35:8e:31:81:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:06.054004 containerd[1571]: 2025-10-28 23:16:06.050 [INFO][4284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" Namespace="kube-system" Pod="coredns-66bc5c9577-q9hnr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--q9hnr-eth0" Oct 28 23:16:06.074750 containerd[1571]: time="2025-10-28T23:16:06.074695308Z" level=info msg="connecting to shim 8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06" address="unix:///run/containerd/s/9e84b69fd70ecf5d171b671cb4771f64cc3d29076a8b742c1cf5a8ae08733eb5" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:06.076711 containerd[1571]: time="2025-10-28T23:16:06.076602426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cf76cbd9f-lbvzw,Uid:03d12470-929a-414d-b9fc-0eb2e9388b7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8144c56546c68f438a02586425f74a579b13323972d0d9865adb82f5c8a3db5b\"" Oct 28 23:16:06.101597 systemd[1]: Started cri-containerd-8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06.scope - 
libcontainer container 8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06. Oct 28 23:16:06.112969 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:06.162953 containerd[1571]: time="2025-10-28T23:16:06.162910983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:06.166632 containerd[1571]: time="2025-10-28T23:16:06.166603419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-q9hnr,Uid:5d6d39cf-8a2a-454d-9ef8-fd471b116726,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06\"" Oct 28 23:16:06.167528 kubelet[2719]: E1028 23:16:06.167505 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:06.192584 containerd[1571]: time="2025-10-28T23:16:06.192534274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 23:16:06.192677 containerd[1571]: time="2025-10-28T23:16:06.192606114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 23:16:06.192795 kubelet[2719]: E1028 23:16:06.192759 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:16:06.192846 kubelet[2719]: E1028 23:16:06.192814 
2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:16:06.193235 kubelet[2719]: E1028 23:16:06.193181 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bn5ff_calico-system(573d9c02-82cb-4bf8-9f40-79127dc42465): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:06.193329 kubelet[2719]: E1028 23:16:06.193232 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bn5ff" podUID="573d9c02-82cb-4bf8-9f40-79127dc42465" Oct 28 23:16:06.193792 containerd[1571]: time="2025-10-28T23:16:06.193759833Z" level=info msg="CreateContainer within sandbox \"8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 23:16:06.216015 containerd[1571]: time="2025-10-28T23:16:06.215962652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 23:16:06.222608 containerd[1571]: time="2025-10-28T23:16:06.222565645Z" level=info msg="Container 7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2: CDI devices from CRI Config.CDIDevices: 
[]" Oct 28 23:16:06.231176 containerd[1571]: time="2025-10-28T23:16:06.231128437Z" level=info msg="CreateContainer within sandbox \"8e4719f87ae19c3534f27bda3b63903f499731a9f71959a59d5850f18902cf06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2\"" Oct 28 23:16:06.235514 containerd[1571]: time="2025-10-28T23:16:06.232596836Z" level=info msg="StartContainer for \"7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2\"" Oct 28 23:16:06.235514 containerd[1571]: time="2025-10-28T23:16:06.233400675Z" level=info msg="connecting to shim 7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2" address="unix:///run/containerd/s/9e84b69fd70ecf5d171b671cb4771f64cc3d29076a8b742c1cf5a8ae08733eb5" protocol=ttrpc version=3 Oct 28 23:16:06.266609 systemd[1]: Started cri-containerd-7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2.scope - libcontainer container 7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2. 
Oct 28 23:16:06.294787 containerd[1571]: time="2025-10-28T23:16:06.294750416Z" level=info msg="StartContainer for \"7e0cbb510d6dd33549f4963a64e0d4a2ee03d93a96ec3fdb4290138883f7c5f2\" returns successfully" Oct 28 23:16:06.437695 containerd[1571]: time="2025-10-28T23:16:06.437652238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:06.438716 containerd[1571]: time="2025-10-28T23:16:06.438681517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 23:16:06.439003 containerd[1571]: time="2025-10-28T23:16:06.438739277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 23:16:06.439048 kubelet[2719]: E1028 23:16:06.439002 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:16:06.439109 kubelet[2719]: E1028 23:16:06.439056 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:16:06.439151 kubelet[2719]: E1028 23:16:06.439121 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-kube-controllers start failed in pod calico-kube-controllers-cf76cbd9f-lbvzw_calico-system(03d12470-929a-414d-b9fc-0eb2e9388b7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:06.439202 kubelet[2719]: E1028 23:16:06.439151 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" podUID="03d12470-929a-414d-b9fc-0eb2e9388b7a" Oct 28 23:16:06.700667 systemd-networkd[1485]: cali666ed96b12c: Gained IPv6LL Oct 28 23:16:06.901553 kubelet[2719]: E1028 23:16:06.901505 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:06.903393 kubelet[2719]: E1028 23:16:06.903356 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" 
podUID="03d12470-929a-414d-b9fc-0eb2e9388b7a" Oct 28 23:16:06.905472 kubelet[2719]: E1028 23:16:06.905422 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" podUID="84a92fac-14cc-4b8a-a065-7ef0df05e34f" Oct 28 23:16:06.905472 kubelet[2719]: E1028 23:16:06.905456 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bn5ff" podUID="573d9c02-82cb-4bf8-9f40-79127dc42465" Oct 28 23:16:06.959858 kubelet[2719]: I1028 23:16:06.959709 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q9hnr" podStartSLOduration=38.959646655 podStartE2EDuration="38.959646655s" podCreationTimestamp="2025-10-28 23:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:16:06.945528309 +0000 UTC m=+45.324491616" watchObservedRunningTime="2025-10-28 23:16:06.959646655 +0000 UTC m=+45.338609842" Oct 28 23:16:07.084575 systemd-networkd[1485]: cali952d72727a0: Gained IPv6LL Oct 28 23:16:07.276594 systemd-networkd[1485]: 
calid29f185517c: Gained IPv6LL Oct 28 23:16:07.468571 systemd-networkd[1485]: cali1ba4ef6d110: Gained IPv6LL Oct 28 23:16:07.708846 containerd[1571]: time="2025-10-28T23:16:07.708713257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4zdp,Uid:6b1cc5e3-6c35-4356-8831-57857e48a65e,Namespace:calico-system,Attempt:0,}" Oct 28 23:16:07.710240 containerd[1571]: time="2025-10-28T23:16:07.710211816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-k8qt2,Uid:e693546f-22c9-4f3e-b82e-1c2bd8d6de81,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:16:07.710864 kubelet[2719]: E1028 23:16:07.710839 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:07.715472 containerd[1571]: time="2025-10-28T23:16:07.712282294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qtgwt,Uid:eddf9348-9a5f-4ac7-b557-fae5d1e3fcff,Namespace:kube-system,Attempt:0,}" Oct 28 23:16:07.836509 systemd-networkd[1485]: calif2b2b9bb655: Link UP Oct 28 23:16:07.836696 systemd-networkd[1485]: calif2b2b9bb655: Gained carrier Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.756 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n4zdp-eth0 csi-node-driver- calico-system 6b1cc5e3-6c35-4356-8831-57857e48a65e 777 0 2025-10-28 23:15:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n4zdp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif2b2b9bb655 [] [] }} 
ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.756 [INFO][4559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.792 [INFO][4604] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" HandleID="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Workload="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.792 [INFO][4604] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" HandleID="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Workload="localhost-k8s-csi--node--driver--n4zdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n4zdp", "timestamp":"2025-10-28 23:16:07.792130742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.792 [INFO][4604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.792 [INFO][4604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.792 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.807 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.812 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.816 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.818 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.820 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.820 [INFO][4604] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.822 [INFO][4604] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6 Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.825 [INFO][4604] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.831 [INFO][4604] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.831 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" host="localhost" Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.831 [INFO][4604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:16:07.851411 containerd[1571]: 2025-10-28 23:16:07.831 [INFO][4604] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" HandleID="k8s-pod-network.2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Workload="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.851951 containerd[1571]: 2025-10-28 23:16:07.833 [INFO][4559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n4zdp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b1cc5e3-6c35-4356-8831-57857e48a65e", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n4zdp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2b2b9bb655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:07.851951 containerd[1571]: 2025-10-28 23:16:07.833 [INFO][4559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.851951 containerd[1571]: 2025-10-28 23:16:07.833 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2b2b9bb655 ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.851951 containerd[1571]: 2025-10-28 23:16:07.837 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.851951 containerd[1571]: 2025-10-28 23:16:07.837 [INFO][4559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" 
Namespace="calico-system" Pod="csi-node-driver-n4zdp" WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n4zdp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b1cc5e3-6c35-4356-8831-57857e48a65e", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6", Pod:"csi-node-driver-n4zdp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2b2b9bb655", MAC:"3a:28:7e:62:e0:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:07.851951 containerd[1571]: 2025-10-28 23:16:07.848 [INFO][4559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" Namespace="calico-system" Pod="csi-node-driver-n4zdp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--n4zdp-eth0" Oct 28 23:16:07.868390 containerd[1571]: time="2025-10-28T23:16:07.868178673Z" level=info msg="connecting to shim 2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6" address="unix:///run/containerd/s/602f476c3d870de47f6a4a6f036b0e39f9b82feb1f0d22407be61b31b31dc7eb" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:07.898702 systemd[1]: Started cri-containerd-2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6.scope - libcontainer container 2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6. Oct 28 23:16:07.906873 kubelet[2719]: E1028 23:16:07.906838 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:07.907666 kubelet[2719]: E1028 23:16:07.907632 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bn5ff" podUID="573d9c02-82cb-4bf8-9f40-79127dc42465" Oct 28 23:16:07.908413 kubelet[2719]: E1028 23:16:07.908281 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" podUID="03d12470-929a-414d-b9fc-0eb2e9388b7a" Oct 28 23:16:07.918652 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:07.938269 containerd[1571]: time="2025-10-28T23:16:07.938170530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n4zdp,Uid:6b1cc5e3-6c35-4356-8831-57857e48a65e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a270c920dae0a84f76d69a36002183a9d0805c2b4f60d8e479c35fee52b19d6\"" Oct 28 23:16:07.941544 containerd[1571]: time="2025-10-28T23:16:07.941510287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 23:16:07.959736 systemd-networkd[1485]: calif4bf2333d6a: Link UP Oct 28 23:16:07.960260 systemd-networkd[1485]: calif4bf2333d6a: Gained carrier Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.774 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--qtgwt-eth0 coredns-66bc5c9577- kube-system eddf9348-9a5f-4ac7-b557-fae5d1e3fcff 874 0 2025-10-28 23:15:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-qtgwt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4bf2333d6a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.775 [INFO][4586] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.806 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" HandleID="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Workload="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.806 [INFO][4614] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" HandleID="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Workload="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2190), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-qtgwt", "timestamp":"2025-10-28 23:16:07.806571929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.806 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.831 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.831 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.909 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.919 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.930 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.933 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.935 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.936 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.938 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.944 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.949 [INFO][4614] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.950 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" host="localhost" Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.950 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:16:07.978514 containerd[1571]: 2025-10-28 23:16:07.950 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" HandleID="k8s-pod-network.2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Workload="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 23:16:07.979043 containerd[1571]: 2025-10-28 23:16:07.956 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qtgwt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"eddf9348-9a5f-4ac7-b557-fae5d1e3fcff", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-qtgwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4bf2333d6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:07.979043 containerd[1571]: 2025-10-28 23:16:07.956 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 23:16:07.979043 containerd[1571]: 2025-10-28 23:16:07.957 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4bf2333d6a ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 
23:16:07.979043 containerd[1571]: 2025-10-28 23:16:07.960 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 23:16:07.979043 containerd[1571]: 2025-10-28 23:16:07.962 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--qtgwt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"eddf9348-9a5f-4ac7-b557-fae5d1e3fcff", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f", Pod:"coredns-66bc5c9577-qtgwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4bf2333d6a", 
MAC:"4e:8e:95:e9:40:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:07.979043 containerd[1571]: 2025-10-28 23:16:07.976 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" Namespace="kube-system" Pod="coredns-66bc5c9577-qtgwt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--qtgwt-eth0" Oct 28 23:16:07.998490 containerd[1571]: time="2025-10-28T23:16:07.998415915Z" level=info msg="connecting to shim 2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f" address="unix:///run/containerd/s/d4c22cf60be23e1510b78890c064d4c62b3fd1313371281668276951abe60437" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:08.032831 systemd[1]: Started cri-containerd-2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f.scope - libcontainer container 2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f. 
Oct 28 23:16:08.045623 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:08.060811 systemd-networkd[1485]: calic115e3d175b: Link UP Oct 28 23:16:08.062956 systemd-networkd[1485]: calic115e3d175b: Gained carrier Oct 28 23:16:08.073243 containerd[1571]: time="2025-10-28T23:16:08.073205052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qtgwt,Uid:eddf9348-9a5f-4ac7-b557-fae5d1e3fcff,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f\"" Oct 28 23:16:08.074644 kubelet[2719]: E1028 23:16:08.074617 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.774 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0 calico-apiserver-8dd56995c- calico-apiserver e693546f-22c9-4f3e-b82e-1c2bd8d6de81 876 0 2025-10-28 23:15:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8dd56995c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8dd56995c-k8qt2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic115e3d175b [] [] }} ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.775 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.808 [INFO][4612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" HandleID="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Workload="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.808 [INFO][4612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" HandleID="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Workload="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000128290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8dd56995c-k8qt2", "timestamp":"2025-10-28 23:16:07.808019927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.808 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.950 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:07.950 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.007 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.017 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.033 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.037 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.039 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.040 [INFO][4612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.042 [INFO][4612] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208 Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.047 [INFO][4612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.054 [INFO][4612] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.055 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" host="localhost" Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.055 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:16:08.081389 containerd[1571]: 2025-10-28 23:16:08.055 [INFO][4612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" HandleID="k8s-pod-network.b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Workload="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.081979 containerd[1571]: 2025-10-28 23:16:08.057 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0", GenerateName:"calico-apiserver-8dd56995c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e693546f-22c9-4f3e-b82e-1c2bd8d6de81", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8dd56995c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8dd56995c-k8qt2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic115e3d175b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:08.081979 containerd[1571]: 2025-10-28 23:16:08.057 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.081979 containerd[1571]: 2025-10-28 23:16:08.057 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic115e3d175b ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.081979 containerd[1571]: 2025-10-28 23:16:08.064 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.081979 containerd[1571]: 2025-10-28 23:16:08.066 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0", GenerateName:"calico-apiserver-8dd56995c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e693546f-22c9-4f3e-b82e-1c2bd8d6de81", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8dd56995c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208", Pod:"calico-apiserver-8dd56995c-k8qt2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic115e3d175b", MAC:"26:af:68:b3:b5:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:16:08.081979 containerd[1571]: 2025-10-28 23:16:08.078 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" Namespace="calico-apiserver" Pod="calico-apiserver-8dd56995c-k8qt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--8dd56995c--k8qt2-eth0" Oct 28 23:16:08.082892 containerd[1571]: time="2025-10-28T23:16:08.082748964Z" level=info msg="CreateContainer within sandbox \"2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 23:16:08.095598 containerd[1571]: time="2025-10-28T23:16:08.095553593Z" level=info msg="Container ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:16:08.101104 containerd[1571]: time="2025-10-28T23:16:08.100899069Z" level=info msg="connecting to shim b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208" address="unix:///run/containerd/s/2ab31e6b758e72b63d8a8db35db121b276b534f226d9c178244e050576267d76" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:16:08.103181 containerd[1571]: time="2025-10-28T23:16:08.103124467Z" level=info msg="CreateContainer within sandbox \"2c5802f30409dba1ca4b028a8f9a254985d8b3bb4a17f7e62454585c59ca7c8f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429\"" Oct 28 23:16:08.104211 containerd[1571]: time="2025-10-28T23:16:08.104169906Z" level=info msg="StartContainer for \"ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429\"" Oct 28 23:16:08.106446 containerd[1571]: time="2025-10-28T23:16:08.106402104Z" level=info msg="connecting to shim ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429" address="unix:///run/containerd/s/d4c22cf60be23e1510b78890c064d4c62b3fd1313371281668276951abe60437" protocol=ttrpc version=3 Oct 28 23:16:08.132664 systemd[1]: Started cri-containerd-b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208.scope - libcontainer container 
b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208. Oct 28 23:16:08.134440 systemd[1]: Started cri-containerd-ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429.scope - libcontainer container ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429. Oct 28 23:16:08.147732 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:16:08.154730 containerd[1571]: time="2025-10-28T23:16:08.154688703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:08.155611 containerd[1571]: time="2025-10-28T23:16:08.155565582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 23:16:08.155611 containerd[1571]: time="2025-10-28T23:16:08.155597822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 23:16:08.155854 kubelet[2719]: E1028 23:16:08.155800 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:16:08.155895 kubelet[2719]: E1028 23:16:08.155865 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:16:08.155962 kubelet[2719]: E1028 23:16:08.155942 2719 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n4zdp_calico-system(6b1cc5e3-6c35-4356-8831-57857e48a65e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:08.157211 containerd[1571]: time="2025-10-28T23:16:08.157173461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 23:16:08.185093 containerd[1571]: time="2025-10-28T23:16:08.185040477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8dd56995c-k8qt2,Uid:e693546f-22c9-4f3e-b82e-1c2bd8d6de81,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b4242c025adc2b7b962439be45d9c958f3c2aaa3eba7cd4837d3dab1c6f93208\"" Oct 28 23:16:08.185322 containerd[1571]: time="2025-10-28T23:16:08.185299557Z" level=info msg="StartContainer for \"ef14ff01eccf1bb2f6864f7a5945750237af16b9dcce241b697f74b3e4348429\" returns successfully" Oct 28 23:16:08.915391 kubelet[2719]: E1028 23:16:08.914924 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:08.915391 kubelet[2719]: E1028 23:16:08.915118 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:08.929344 kubelet[2719]: I1028 23:16:08.928728 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qtgwt" podStartSLOduration=40.928710208 podStartE2EDuration="40.928710208s" podCreationTimestamp="2025-10-28 23:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-10-28 23:16:08.927218849 +0000 UTC m=+47.306182036" watchObservedRunningTime="2025-10-28 23:16:08.928710208 +0000 UTC m=+47.307673395" Oct 28 23:16:08.985510 containerd[1571]: time="2025-10-28T23:16:08.985463680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:08.986501 containerd[1571]: time="2025-10-28T23:16:08.986456599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 23:16:08.986557 containerd[1571]: time="2025-10-28T23:16:08.986524559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 23:16:08.986766 kubelet[2719]: E1028 23:16:08.986687 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:16:08.986819 kubelet[2719]: E1028 23:16:08.986764 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:16:08.986958 kubelet[2719]: E1028 23:16:08.986930 2719 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container csi-node-driver-registrar start failed in pod csi-node-driver-n4zdp_calico-system(6b1cc5e3-6c35-4356-8831-57857e48a65e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:08.987014 kubelet[2719]: E1028 23:16:08.986985 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:16:08.987148 containerd[1571]: time="2025-10-28T23:16:08.987105079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:16:09.132571 systemd-networkd[1485]: calif2b2b9bb655: Gained IPv6LL Oct 28 23:16:09.175350 containerd[1571]: time="2025-10-28T23:16:09.175228929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:09.225444 containerd[1571]: time="2025-10-28T23:16:09.224823889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:16:09.225444 containerd[1571]: time="2025-10-28T23:16:09.225036729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:16:09.225594 kubelet[2719]: E1028 23:16:09.225222 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:16:09.225594 kubelet[2719]: E1028 23:16:09.225269 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:16:09.225594 kubelet[2719]: E1028 23:16:09.225395 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8dd56995c-k8qt2_calico-apiserver(e693546f-22c9-4f3e-b82e-1c2bd8d6de81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:09.225758 kubelet[2719]: E1028 23:16:09.225699 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" podUID="e693546f-22c9-4f3e-b82e-1c2bd8d6de81" Oct 28 23:16:09.324605 systemd-networkd[1485]: calic115e3d175b: Gained IPv6LL Oct 28 23:16:09.324925 systemd-networkd[1485]: calif4bf2333d6a: Gained IPv6LL Oct 28 23:16:09.701400 kubelet[2719]: I1028 23:16:09.701198 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 23:16:09.701837 kubelet[2719]: E1028 23:16:09.701818 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:09.813443 containerd[1571]: time="2025-10-28T23:16:09.813383662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf\" id:\"9ee212c2674c9441ed80c2bffb94606f78c4089f5cbcc561ccaa369a035f1338\" pid:4840 exited_at:{seconds:1761693369 nanos:813070063}" Oct 28 23:16:09.897257 containerd[1571]: time="2025-10-28T23:16:09.897207476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94cc48291861ed7baf92914ea14167583393e879b50af378de4518adb65d69bf\" id:\"a480718cf5826e6f521cdf24a0350a7b9f8ae0aec0645c79944ad619fcc181ac\" pid:4866 exited_at:{seconds:1761693369 nanos:896919556}" Oct 28 23:16:09.918520 kubelet[2719]: E1028 23:16:09.917236 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:09.918520 kubelet[2719]: E1028 23:16:09.918071 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:09.918864 kubelet[2719]: E1028 23:16:09.918847 2719 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:09.919115 kubelet[2719]: E1028 23:16:09.919032 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" podUID="e693546f-22c9-4f3e-b82e-1c2bd8d6de81" Oct 28 23:16:09.922143 kubelet[2719]: E1028 23:16:09.921754 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e" Oct 28 23:16:10.685332 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:49070.service - OpenSSH per-connection server daemon (10.0.0.1:49070). 
Oct 28 23:16:10.756377 sshd[4888]: Accepted publickey for core from 10.0.0.1 port 49070 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:10.758169 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:10.763554 systemd-logind[1547]: New session 11 of user core. Oct 28 23:16:10.773675 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 23:16:10.918708 sshd[4892]: Connection closed by 10.0.0.1 port 49070 Oct 28 23:16:10.919220 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:10.921123 kubelet[2719]: E1028 23:16:10.921076 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:10.928158 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:49070.service: Deactivated successfully. Oct 28 23:16:10.931394 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 23:16:10.932847 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. Oct 28 23:16:10.938107 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). Oct 28 23:16:10.939270 systemd-logind[1547]: Removed session 11. Oct 28 23:16:10.999090 sshd[4906]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:11.000754 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:11.004687 systemd-logind[1547]: New session 12 of user core. Oct 28 23:16:11.012555 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 28 23:16:11.154281 sshd[4910]: Connection closed by 10.0.0.1 port 49072 Oct 28 23:16:11.153802 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:11.164511 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:49072.service: Deactivated successfully. Oct 28 23:16:11.166887 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 23:16:11.168614 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit. Oct 28 23:16:11.175865 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:49088.service - OpenSSH per-connection server daemon (10.0.0.1:49088). Oct 28 23:16:11.177161 systemd-logind[1547]: Removed session 12. Oct 28 23:16:11.234796 sshd[4921]: Accepted publickey for core from 10.0.0.1 port 49088 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:11.237108 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:11.242618 systemd-logind[1547]: New session 13 of user core. Oct 28 23:16:11.250594 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 23:16:11.347550 sshd[4925]: Connection closed by 10.0.0.1 port 49088 Oct 28 23:16:11.348042 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:11.351866 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:49088.service: Deactivated successfully. Oct 28 23:16:11.353750 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 23:16:11.354578 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit. Oct 28 23:16:11.355760 systemd-logind[1547]: Removed session 13. 
Oct 28 23:16:11.922797 kubelet[2719]: E1028 23:16:11.922737 2719 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:16:14.706478 containerd[1571]: time="2025-10-28T23:16:14.705555921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 23:16:14.954943 containerd[1571]: time="2025-10-28T23:16:14.954901378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:14.975042 containerd[1571]: time="2025-10-28T23:16:14.974941766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 23:16:14.975042 containerd[1571]: time="2025-10-28T23:16:14.974991046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 23:16:14.975138 kubelet[2719]: E1028 23:16:14.975113 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:16:14.975484 kubelet[2719]: E1028 23:16:14.975155 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:16:14.975484 kubelet[2719]: E1028 23:16:14.975223 2719 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c96dcf9f6-f22wj_calico-system(029c1d7b-85d2-40f9-a251-8e93ec6d00e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:14.976710 containerd[1571]: time="2025-10-28T23:16:14.976538245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 23:16:15.467753 containerd[1571]: time="2025-10-28T23:16:15.467705260Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:16:15.468797 containerd[1571]: time="2025-10-28T23:16:15.468746179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 23:16:15.468862 containerd[1571]: time="2025-10-28T23:16:15.468831219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 23:16:15.468984 kubelet[2719]: E1028 23:16:15.468945 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:16:15.469027 kubelet[2719]: E1028 23:16:15.468990 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:16:15.469078 kubelet[2719]: E1028 23:16:15.469059 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c96dcf9f6-f22wj_calico-system(029c1d7b-85d2-40f9-a251-8e93ec6d00e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 23:16:15.469136 kubelet[2719]: E1028 23:16:15.469108 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c96dcf9f6-f22wj" podUID="029c1d7b-85d2-40f9-a251-8e93ec6d00e8" Oct 28 23:16:16.366761 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:49092.service - OpenSSH per-connection server daemon (10.0.0.1:49092). 
Oct 28 23:16:16.421260 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 49092 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:16.424197 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:16.429501 systemd-logind[1547]: New session 14 of user core. Oct 28 23:16:16.437592 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 23:16:16.531093 sshd[4950]: Connection closed by 10.0.0.1 port 49092 Oct 28 23:16:16.533672 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:16.540276 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:49092.service: Deactivated successfully. Oct 28 23:16:16.543153 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 23:16:16.544942 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit. Oct 28 23:16:16.546881 systemd-logind[1547]: Removed session 14. Oct 28 23:16:16.548973 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:49108.service - OpenSSH per-connection server daemon (10.0.0.1:49108). Oct 28 23:16:16.614315 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 49108 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:16.615550 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:16.620028 systemd-logind[1547]: New session 15 of user core. Oct 28 23:16:16.630596 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 23:16:16.945911 sshd[4967]: Connection closed by 10.0.0.1 port 49108 Oct 28 23:16:16.946303 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:16.956488 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:49108.service: Deactivated successfully. Oct 28 23:16:16.960611 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 23:16:16.961307 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit. 
Oct 28 23:16:16.964039 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:49110.service - OpenSSH per-connection server daemon (10.0.0.1:49110). Oct 28 23:16:16.964737 systemd-logind[1547]: Removed session 15. Oct 28 23:16:17.016149 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 49110 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:17.017566 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:17.023259 systemd-logind[1547]: New session 16 of user core. Oct 28 23:16:17.031100 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 23:16:17.609098 sshd[4984]: Connection closed by 10.0.0.1 port 49110 Oct 28 23:16:17.609536 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:17.618838 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:49110.service: Deactivated successfully. Oct 28 23:16:17.620765 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 23:16:17.621790 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit. Oct 28 23:16:17.627755 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:49122.service - OpenSSH per-connection server daemon (10.0.0.1:49122). Oct 28 23:16:17.628514 systemd-logind[1547]: Removed session 16. Oct 28 23:16:17.687283 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 49122 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:17.688779 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:17.693498 systemd-logind[1547]: New session 17 of user core. Oct 28 23:16:17.704615 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 28 23:16:17.926410 sshd[5006]: Connection closed by 10.0.0.1 port 49122 Oct 28 23:16:17.926792 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:17.937639 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:49122.service: Deactivated successfully. Oct 28 23:16:17.939898 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 23:16:17.942058 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit. Oct 28 23:16:17.947500 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:49124.service - OpenSSH per-connection server daemon (10.0.0.1:49124). Oct 28 23:16:17.948562 systemd-logind[1547]: Removed session 17. Oct 28 23:16:18.007395 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 49124 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:16:18.009333 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:16:18.014492 systemd-logind[1547]: New session 18 of user core. Oct 28 23:16:18.020555 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 28 23:16:18.113633 sshd[5021]: Connection closed by 10.0.0.1 port 49124 Oct 28 23:16:18.113940 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Oct 28 23:16:18.118098 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:49124.service: Deactivated successfully. Oct 28 23:16:18.120904 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 23:16:18.122286 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit. Oct 28 23:16:18.124473 systemd-logind[1547]: Removed session 18. 
Oct 28 23:16:18.705751 containerd[1571]: time="2025-10-28T23:16:18.705716842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 28 23:16:18.891610 containerd[1571]: time="2025-10-28T23:16:18.891555319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:18.892477 containerd[1571]: time="2025-10-28T23:16:18.892422599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 28 23:16:18.892539 containerd[1571]: time="2025-10-28T23:16:18.892457559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:16:18.892693 kubelet[2719]: E1028 23:16:18.892658 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 28 23:16:18.892979 kubelet[2719]: E1028 23:16:18.892702 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 28 23:16:18.892979 kubelet[2719]: E1028 23:16:18.892779 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bn5ff_calico-system(573d9c02-82cb-4bf8-9f40-79127dc42465): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:18.892979 kubelet[2719]: E1028 23:16:18.892812 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bn5ff" podUID="573d9c02-82cb-4bf8-9f40-79127dc42465"
Oct 28 23:16:19.704727 containerd[1571]: time="2025-10-28T23:16:19.704680298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 28 23:16:19.944150 containerd[1571]: time="2025-10-28T23:16:19.944078878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:19.945024 containerd[1571]: time="2025-10-28T23:16:19.944990918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 28 23:16:19.945092 containerd[1571]: time="2025-10-28T23:16:19.945026398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:16:19.945179 kubelet[2719]: E1028 23:16:19.945148 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:16:19.945384 kubelet[2719]: E1028 23:16:19.945185 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:16:19.945384 kubelet[2719]: E1028 23:16:19.945257 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8dd56995c-pndrt_calico-apiserver(84a92fac-14cc-4b8a-a065-7ef0df05e34f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:19.945384 kubelet[2719]: E1028 23:16:19.945321 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" podUID="84a92fac-14cc-4b8a-a065-7ef0df05e34f"
Oct 28 23:16:21.705304 containerd[1571]: time="2025-10-28T23:16:21.705145087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 28 23:16:21.911651 containerd[1571]: time="2025-10-28T23:16:21.911539651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:21.912610 containerd[1571]: time="2025-10-28T23:16:21.912552371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 28 23:16:21.912610 containerd[1571]: time="2025-10-28T23:16:21.912554091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 28 23:16:21.912936 kubelet[2719]: E1028 23:16:21.912868 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 28 23:16:21.912936 kubelet[2719]: E1028 23:16:21.912926 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 28 23:16:21.913250 kubelet[2719]: E1028 23:16:21.913104 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-cf76cbd9f-lbvzw_calico-system(03d12470-929a-414d-b9fc-0eb2e9388b7a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:21.913250 kubelet[2719]: E1028 23:16:21.913157 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" podUID="03d12470-929a-414d-b9fc-0eb2e9388b7a"
Oct 28 23:16:21.914455 containerd[1571]: time="2025-10-28T23:16:21.913749611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 28 23:16:22.120919 containerd[1571]: time="2025-10-28T23:16:22.120864298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:22.121897 containerd[1571]: time="2025-10-28T23:16:22.121852937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 28 23:16:22.121957 containerd[1571]: time="2025-10-28T23:16:22.121924297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 28 23:16:22.122139 kubelet[2719]: E1028 23:16:22.122076 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 23:16:22.122139 kubelet[2719]: E1028 23:16:22.122138 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 23:16:22.122231 kubelet[2719]: E1028 23:16:22.122208 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-n4zdp_calico-system(6b1cc5e3-6c35-4356-8831-57857e48a65e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:22.123040 containerd[1571]: time="2025-10-28T23:16:22.122994577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 28 23:16:22.340452 containerd[1571]: time="2025-10-28T23:16:22.340368702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:22.341402 containerd[1571]: time="2025-10-28T23:16:22.341352062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 28 23:16:22.341465 containerd[1571]: time="2025-10-28T23:16:22.341413462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 28 23:16:22.341623 kubelet[2719]: E1028 23:16:22.341583 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 23:16:22.341674 kubelet[2719]: E1028 23:16:22.341635 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 23:16:22.341733 kubelet[2719]: E1028 23:16:22.341710 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-n4zdp_calico-system(6b1cc5e3-6c35-4356-8831-57857e48a65e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:22.341804 kubelet[2719]: E1028 23:16:22.341757 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e"
Oct 28 23:16:23.129106 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:49328.service - OpenSSH per-connection server daemon (10.0.0.1:49328).
Oct 28 23:16:23.183157 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 49328 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg
Oct 28 23:16:23.186317 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:16:23.194113 systemd-logind[1547]: New session 19 of user core.
Oct 28 23:16:23.203643 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 28 23:16:23.305476 sshd[5050]: Connection closed by 10.0.0.1 port 49328
Oct 28 23:16:23.305682 sshd-session[5046]: pam_unix(sshd:session): session closed for user core
Oct 28 23:16:23.309730 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:49328.service: Deactivated successfully.
Oct 28 23:16:23.311457 systemd[1]: session-19.scope: Deactivated successfully.
Oct 28 23:16:23.312208 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit.
Oct 28 23:16:23.313502 systemd-logind[1547]: Removed session 19.
Oct 28 23:16:24.704819 containerd[1571]: time="2025-10-28T23:16:24.704780742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 28 23:16:24.906831 containerd[1571]: time="2025-10-28T23:16:24.906714122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:16:24.907802 containerd[1571]: time="2025-10-28T23:16:24.907695961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 28 23:16:24.907802 containerd[1571]: time="2025-10-28T23:16:24.907769921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:16:24.907957 kubelet[2719]: E1028 23:16:24.907920 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:16:24.908248 kubelet[2719]: E1028 23:16:24.907964 2719 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:16:24.908248 kubelet[2719]: E1028 23:16:24.908043 2719 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8dd56995c-k8qt2_calico-apiserver(e693546f-22c9-4f3e-b82e-1c2bd8d6de81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:16:24.908248 kubelet[2719]: E1028 23:16:24.908105 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-k8qt2" podUID="e693546f-22c9-4f3e-b82e-1c2bd8d6de81"
Oct 28 23:16:28.320046 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:49340.service - OpenSSH per-connection server daemon (10.0.0.1:49340).
Oct 28 23:16:28.384718 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 49340 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg
Oct 28 23:16:28.386558 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:16:28.391078 systemd-logind[1547]: New session 20 of user core.
Oct 28 23:16:28.400626 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 28 23:16:28.502740 sshd[5072]: Connection closed by 10.0.0.1 port 49340
Oct 28 23:16:28.502953 sshd-session[5068]: pam_unix(sshd:session): session closed for user core
Oct 28 23:16:28.507778 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:49340.service: Deactivated successfully.
Oct 28 23:16:28.512107 systemd[1]: session-20.scope: Deactivated successfully.
Oct 28 23:16:28.512954 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
Oct 28 23:16:28.514246 systemd-logind[1547]: Removed session 20.
Oct 28 23:16:29.706487 kubelet[2719]: E1028 23:16:29.706402 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c96dcf9f6-f22wj" podUID="029c1d7b-85d2-40f9-a251-8e93ec6d00e8"
Oct 28 23:16:31.704836 kubelet[2719]: E1028 23:16:31.704746 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bn5ff" podUID="573d9c02-82cb-4bf8-9f40-79127dc42465"
Oct 28 23:16:32.704964 kubelet[2719]: E1028 23:16:32.704846 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cf76cbd9f-lbvzw" podUID="03d12470-929a-414d-b9fc-0eb2e9388b7a"
Oct 28 23:16:33.514681 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:48444.service - OpenSSH per-connection server daemon (10.0.0.1:48444).
Oct 28 23:16:33.584821 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 48444 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg
Oct 28 23:16:33.586536 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:16:33.590532 systemd-logind[1547]: New session 21 of user core.
Oct 28 23:16:33.605579 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 28 23:16:33.707095 kubelet[2719]: E1028 23:16:33.707043 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8dd56995c-pndrt" podUID="84a92fac-14cc-4b8a-a065-7ef0df05e34f"
Oct 28 23:16:33.708696 kubelet[2719]: E1028 23:16:33.708647 2719 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n4zdp" podUID="6b1cc5e3-6c35-4356-8831-57857e48a65e"
Oct 28 23:16:33.741917 sshd[5091]: Connection closed by 10.0.0.1 port 48444
Oct 28 23:16:33.742244 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
Oct 28 23:16:33.746123 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:48444.service: Deactivated successfully.
Oct 28 23:16:33.748220 systemd[1]: session-21.scope: Deactivated successfully.
Oct 28 23:16:33.749016 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
Oct 28 23:16:33.749966 systemd-logind[1547]: Removed session 21.