Oct 31 13:49:42.250218 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 31 13:49:42.250239 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Oct 31 12:15:30 -00 2025 Oct 31 13:49:42.250248 kernel: KASLR enabled Oct 31 13:49:42.250254 kernel: efi: EFI v2.7 by EDK II Oct 31 13:49:42.250260 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 31 13:49:42.250266 kernel: random: crng init done Oct 31 13:49:42.250288 kernel: secureboot: Secure boot disabled Oct 31 13:49:42.250295 kernel: ACPI: Early table checksum verification disabled Oct 31 13:49:42.250304 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 31 13:49:42.250310 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 31 13:49:42.250316 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250322 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250337 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250344 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250357 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250364 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250370 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250378 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250385 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 13:49:42.250391 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 31 13:49:42.250398 kernel: ACPI: Use ACPI SPCR as default console: No Oct 31 13:49:42.250404 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 31 13:49:42.250412 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 31 13:49:42.250419 kernel: Zone ranges: Oct 31 13:49:42.250425 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 31 13:49:42.250432 kernel: DMA32 empty Oct 31 13:49:42.250438 kernel: Normal empty Oct 31 13:49:42.250445 kernel: Device empty Oct 31 13:49:42.250451 kernel: Movable zone start for each node Oct 31 13:49:42.250459 kernel: Early memory node ranges Oct 31 13:49:42.250466 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 31 13:49:42.250473 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 31 13:49:42.250479 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 31 13:49:42.250486 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 31 13:49:42.250495 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 31 13:49:42.250504 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 31 13:49:42.250513 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 31 13:49:42.250520 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 31 13:49:42.250526 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 31 13:49:42.250532 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 31 13:49:42.250543 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 
31 13:49:42.250550 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 31 13:49:42.250557 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 31 13:49:42.250566 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 31 13:49:42.250574 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 31 13:49:42.250581 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 31 13:49:42.250588 kernel: psci: probing for conduit method from ACPI. Oct 31 13:49:42.250595 kernel: psci: PSCIv1.1 detected in firmware. Oct 31 13:49:42.250605 kernel: psci: Using standard PSCI v0.2 function IDs Oct 31 13:49:42.250612 kernel: psci: Trusted OS migration not required Oct 31 13:49:42.250621 kernel: psci: SMC Calling Convention v1.1 Oct 31 13:49:42.250637 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 31 13:49:42.250645 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 31 13:49:42.250655 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 31 13:49:42.250662 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 31 13:49:42.250675 kernel: Detected PIPT I-cache on CPU0 Oct 31 13:49:42.250682 kernel: CPU features: detected: GIC system register CPU interface Oct 31 13:49:42.250689 kernel: CPU features: detected: Spectre-v4 Oct 31 13:49:42.250696 kernel: CPU features: detected: Spectre-BHB Oct 31 13:49:42.250704 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 31 13:49:42.250711 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 31 13:49:42.250718 kernel: CPU features: detected: ARM erratum 1418040 Oct 31 13:49:42.250725 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 31 13:49:42.250731 kernel: alternatives: applying boot alternatives Oct 31 13:49:42.250739 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cc520f2d13274355d865d6b74d46b5152253502842541152122d42de9e5fecb2 Oct 31 13:49:42.250746 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 31 13:49:42.250753 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 13:49:42.250760 kernel: Fallback order for Node 0: 0 Oct 31 13:49:42.250767 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 31 13:49:42.250775 kernel: Policy zone: DMA Oct 31 13:49:42.250781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 13:49:42.250788 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 31 13:49:42.250795 kernel: software IO TLB: area num 4. Oct 31 13:49:42.250802 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 31 13:49:42.250808 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 31 13:49:42.250815 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 31 13:49:42.250822 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 13:49:42.250829 kernel: rcu: RCU event tracing is enabled. Oct 31 13:49:42.250836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 31 13:49:42.250843 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 13:49:42.250852 kernel: Tracing variant of Tasks RCU enabled. 
Oct 31 13:49:42.250859 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 13:49:42.250865 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 31 13:49:42.250872 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 13:49:42.250879 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 13:49:42.250886 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 31 13:49:42.250893 kernel: GICv3: 256 SPIs implemented Oct 31 13:49:42.250900 kernel: GICv3: 0 Extended SPIs implemented Oct 31 13:49:42.250906 kernel: Root IRQ handler: gic_handle_irq Oct 31 13:49:42.250913 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 31 13:49:42.250920 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 31 13:49:42.250928 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 31 13:49:42.250935 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 31 13:49:42.250942 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 31 13:49:42.250949 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 31 13:49:42.250955 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 31 13:49:42.250962 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 31 13:49:42.250969 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 31 13:49:42.250976 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 13:49:42.250983 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 31 13:49:42.250990 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 31 13:49:42.250997 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 31 13:49:42.251005 kernel: arm-pv: using stolen time PV Oct 31 13:49:42.251012 kernel: Console: colour dummy device 80x25 Oct 31 13:49:42.251020 kernel: ACPI: Core revision 20240827 Oct 31 13:49:42.251027 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 31 13:49:42.251034 kernel: pid_max: default: 32768 minimum: 301 Oct 31 13:49:42.251041 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 31 13:49:42.251048 kernel: landlock: Up and running. Oct 31 13:49:42.251055 kernel: SELinux: Initializing. Oct 31 13:49:42.251063 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 13:49:42.251070 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 13:49:42.251078 kernel: rcu: Hierarchical SRCU implementation. Oct 31 13:49:42.251085 kernel: rcu: Max phase no-delay instances is 400. Oct 31 13:49:42.251092 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 31 13:49:42.251099 kernel: Remapping and enabling EFI services. Oct 31 13:49:42.251106 kernel: smp: Bringing up secondary CPUs ... 
Oct 31 13:49:42.251115 kernel: Detected PIPT I-cache on CPU1 Oct 31 13:49:42.251126 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 31 13:49:42.251135 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 31 13:49:42.251143 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 13:49:42.251150 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 31 13:49:42.251158 kernel: Detected PIPT I-cache on CPU2 Oct 31 13:49:42.251165 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 31 13:49:42.251174 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 31 13:49:42.251182 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 13:49:42.251189 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 31 13:49:42.251196 kernel: Detected PIPT I-cache on CPU3 Oct 31 13:49:42.251204 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 31 13:49:42.251212 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 31 13:49:42.251220 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 31 13:49:42.251228 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 31 13:49:42.251236 kernel: smp: Brought up 1 node, 4 CPUs Oct 31 13:49:42.251243 kernel: SMP: Total of 4 processors activated. Oct 31 13:49:42.251251 kernel: CPU: All CPU(s) started at EL1 Oct 31 13:49:42.251258 kernel: CPU features: detected: 32-bit EL0 Support Oct 31 13:49:42.251266 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 31 13:49:42.251280 kernel: CPU features: detected: Common not Private translations Oct 31 13:49:42.251290 kernel: CPU features: detected: CRC32 instructions Oct 31 13:49:42.251297 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 31 13:49:42.251317 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 31 13:49:42.251331 kernel: CPU features: detected: LSE atomic instructions Oct 31 13:49:42.251340 kernel: CPU features: detected: Privileged Access Never Oct 31 13:49:42.251347 kernel: CPU features: detected: RAS Extension Support Oct 31 13:49:42.251355 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 31 13:49:42.251362 kernel: alternatives: applying system-wide alternatives Oct 31 13:49:42.251372 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 31 13:49:42.251380 kernel: Memory: 2451104K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12288K init, 1038K bss, 98848K reserved, 16384K cma-reserved) Oct 31 13:49:42.251388 kernel: devtmpfs: initialized Oct 31 13:49:42.251396 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 13:49:42.251403 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 31 13:49:42.251411 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 31 13:49:42.251418 kernel: 0 pages in range for non-PLT usage Oct 31 13:49:42.251427 kernel: 515232 pages in range for PLT usage Oct 31 13:49:42.251434 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 13:49:42.251442 kernel: SMBIOS 3.0.0 present. 
Oct 31 13:49:42.251449 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 31 13:49:42.251456 kernel: DMI: Memory slots populated: 1/1 Oct 31 13:49:42.251464 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 13:49:42.251471 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 31 13:49:42.251480 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 31 13:49:42.251488 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 31 13:49:42.251495 kernel: audit: initializing netlink subsys (disabled) Oct 31 13:49:42.251503 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Oct 31 13:49:42.251510 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 13:49:42.251518 kernel: cpuidle: using governor menu Oct 31 13:49:42.251525 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 31 13:49:42.251534 kernel: ASID allocator initialised with 32768 entries Oct 31 13:49:42.251541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 13:49:42.251549 kernel: Serial: AMBA PL011 UART driver Oct 31 13:49:42.251556 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 13:49:42.251564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 13:49:42.251571 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 31 13:49:42.251579 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 31 13:49:42.251586 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 13:49:42.251595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 13:49:42.251602 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 31 13:49:42.251609 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 31 13:49:42.251617 kernel: ACPI: Added _OSI(Module Device) Oct 31 13:49:42.251624 kernel: ACPI: Added _OSI(Processor Device) Oct 31 13:49:42.251632 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 13:49:42.251639 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 13:49:42.251648 kernel: ACPI: Interpreter enabled Oct 31 13:49:42.251655 kernel: ACPI: Using GIC for interrupt routing Oct 31 13:49:42.251663 kernel: ACPI: MCFG table detected, 1 entries Oct 31 13:49:42.251670 kernel: ACPI: CPU0 has been hot-added Oct 31 13:49:42.251678 kernel: ACPI: CPU1 has been hot-added Oct 31 13:49:42.251685 kernel: ACPI: CPU2 has been hot-added Oct 31 13:49:42.251692 kernel: ACPI: CPU3 has been hot-added Oct 31 13:49:42.251700 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 31 13:49:42.251708 kernel: printk: legacy console [ttyAMA0] enabled Oct 31 13:49:42.251716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 13:49:42.251869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 13:49:42.251956 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 31 13:49:42.252036 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 31 13:49:42.252116 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 31 13:49:42.252194 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 31 13:49:42.252204 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 31 13:49:42.252211 
kernel: PCI host bridge to bus 0000:00 Oct 31 13:49:42.252319 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 31 13:49:42.252408 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 31 13:49:42.252484 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 31 13:49:42.252566 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 13:49:42.252661 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 31 13:49:42.252749 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 31 13:49:42.252833 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 31 13:49:42.252910 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 31 13:49:42.252991 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 31 13:49:42.253067 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 31 13:49:42.253145 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 31 13:49:42.253223 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 31 13:49:42.253321 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 31 13:49:42.253407 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 31 13:49:42.253482 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 31 13:49:42.253492 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 31 13:49:42.253500 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 31 13:49:42.253508 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 31 13:49:42.253515 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 31 13:49:42.253523 kernel: iommu: Default domain type: Translated Oct 31 13:49:42.253532 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 31 13:49:42.253540 kernel: efivars: Registered efivars operations Oct 31 13:49:42.253548 kernel: vgaarb: loaded Oct 31 13:49:42.253555 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 31 13:49:42.253563 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 13:49:42.253570 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 13:49:42.253578 kernel: pnp: PnP ACPI init Oct 31 13:49:42.253669 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 31 13:49:42.253681 kernel: pnp: PnP ACPI: found 1 devices Oct 31 13:49:42.253689 kernel: NET: Registered PF_INET protocol family Oct 31 13:49:42.253696 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 31 13:49:42.253704 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 31 13:49:42.253712 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 13:49:42.253719 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 13:49:42.253729 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 31 13:49:42.253737 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 31 13:49:42.253744 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 13:49:42.253752 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 13:49:42.253760 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 13:49:42.253767 kernel: PCI: CLS 0 bytes, default 64 Oct 31 13:49:42.253775 
kernel: kvm [1]: HYP mode not available Oct 31 13:49:42.253784 kernel: Initialise system trusted keyrings Oct 31 13:49:42.253791 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 31 13:49:42.253799 kernel: Key type asymmetric registered Oct 31 13:49:42.253806 kernel: Asymmetric key parser 'x509' registered Oct 31 13:49:42.253814 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 31 13:49:42.253821 kernel: io scheduler mq-deadline registered Oct 31 13:49:42.253829 kernel: io scheduler kyber registered Oct 31 13:49:42.253837 kernel: io scheduler bfq registered Oct 31 13:49:42.253845 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 31 13:49:42.253852 kernel: ACPI: button: Power Button [PWRB] Oct 31 13:49:42.253860 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 31 13:49:42.253939 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 31 13:49:42.253948 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 13:49:42.253956 kernel: thunder_xcv, ver 1.0 Oct 31 13:49:42.253965 kernel: thunder_bgx, ver 1.0 Oct 31 13:49:42.253972 kernel: nicpf, ver 1.0 Oct 31 13:49:42.253980 kernel: nicvf, ver 1.0 Oct 31 13:49:42.254066 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 31 13:49:42.254141 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-31T13:49:41 UTC (1761918581) Oct 31 13:49:42.254151 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 31 13:49:42.254159 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 31 13:49:42.254168 kernel: watchdog: NMI not fully supported Oct 31 13:49:42.254175 kernel: watchdog: Hard watchdog permanently disabled Oct 31 13:49:42.254183 kernel: NET: Registered PF_INET6 protocol family Oct 31 13:49:42.254190 kernel: Segment Routing with IPv6 Oct 31 13:49:42.254197 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 13:49:42.254205 kernel: NET: Registered PF_PACKET protocol family Oct 31 13:49:42.254212 kernel: Key type dns_resolver registered Oct 31 13:49:42.254221 kernel: registered taskstats version 1 Oct 31 13:49:42.254228 kernel: Loading compiled-in X.509 certificates Oct 31 13:49:42.254236 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 64cdd3ce1e781c447f31e2db38e6b9c169999a49' Oct 31 13:49:42.254244 kernel: Demotion targets for Node 0: null Oct 31 13:49:42.254251 kernel: Key type .fscrypt registered Oct 31 13:49:42.254259 kernel: Key type fscrypt-provisioning registered Oct 31 13:49:42.254266 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 13:49:42.254289 kernel: ima: Allocated hash algorithm: sha1 Oct 31 13:49:42.254300 kernel: ima: No architecture policies found Oct 31 13:49:42.254308 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 31 13:49:42.254315 kernel: clk: Disabling unused clocks Oct 31 13:49:42.254323 kernel: PM: genpd: Disabling unused power domains Oct 31 13:49:42.254337 kernel: Freeing unused kernel memory: 12288K Oct 31 13:49:42.254345 kernel: Run /init as init process Oct 31 13:49:42.254357 kernel: with arguments: Oct 31 13:49:42.254364 kernel: /init Oct 31 13:49:42.254372 kernel: with environment: Oct 31 13:49:42.254383 kernel: HOME=/ Oct 31 13:49:42.254393 kernel: TERM=linux Oct 31 13:49:42.254528 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 31 13:49:42.254612 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 31 13:49:42.254626 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 13:49:42.254634 kernel: GPT:16515071 != 27000831 Oct 31 13:49:42.254641 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 13:49:42.254649 kernel: GPT:16515071 != 27000831 Oct 31 13:49:42.254656 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 13:49:42.254664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 13:49:42.254673 kernel: SCSI subsystem initialized Oct 31 13:49:42.254681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 13:49:42.254689 kernel: device-mapper: uevent: version 1.0.3 Oct 31 13:49:42.254696 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 31 13:49:42.254704 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 31 13:49:42.254711 kernel: raid6: neonx8 gen() 15763 MB/s Oct 31 13:49:42.254719 kernel: raid6: neonx4 gen() 15676 MB/s Oct 31 13:49:42.254727 kernel: raid6: neonx2 gen() 13160 MB/s Oct 31 13:49:42.254735 kernel: raid6: neonx1 gen() 10406 MB/s Oct 31 13:49:42.254742 kernel: raid6: int64x8 gen() 6795 MB/s Oct 31 13:49:42.254750 kernel: raid6: int64x4 gen() 7319 MB/s Oct 31 13:49:42.254757 kernel: raid6: int64x2 gen() 6084 MB/s Oct 31 13:49:42.254765 kernel: raid6: int64x1 gen() 5039 MB/s Oct 31 13:49:42.254773 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s Oct 31 13:49:42.254780 kernel: raid6: .... 
xor() 11978 MB/s, rmw enabled Oct 31 13:49:42.254789 kernel: raid6: using neon recovery algorithm Oct 31 13:49:42.254796 kernel: xor: measuring software checksum speed Oct 31 13:49:42.254804 kernel: 8regs : 21641 MB/sec Oct 31 13:49:42.254811 kernel: 32regs : 21658 MB/sec Oct 31 13:49:42.254819 kernel: arm64_neon : 23388 MB/sec Oct 31 13:49:42.254826 kernel: xor: using function: arm64_neon (23388 MB/sec) Oct 31 13:49:42.254834 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 13:49:42.254843 kernel: BTRFS: device fsid 2e48a6cc-4be7-468d-abbe-613184ca2d09 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (206) Oct 31 13:49:42.254850 kernel: BTRFS info (device dm-0): first mount of filesystem 2e48a6cc-4be7-468d-abbe-613184ca2d09 Oct 31 13:49:42.254858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 31 13:49:42.254866 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 13:49:42.254874 kernel: BTRFS info (device dm-0): enabling free space tree Oct 31 13:49:42.254881 kernel: loop: module loaded Oct 31 13:49:42.254889 kernel: loop0: detected capacity change from 0 to 91464 Oct 31 13:49:42.254897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 13:49:42.254906 systemd[1]: Successfully made /usr/ read-only. Oct 31 13:49:42.254917 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 13:49:42.254925 systemd[1]: Detected virtualization kvm. Oct 31 13:49:42.254933 systemd[1]: Detected architecture arm64. Oct 31 13:49:42.254941 systemd[1]: Running in initrd. Oct 31 13:49:42.254950 systemd[1]: No hostname configured, using default hostname. Oct 31 13:49:42.254959 systemd[1]: Hostname set to . Oct 31 13:49:42.254966 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 31 13:49:42.254975 systemd[1]: Queued start job for default target initrd.target. Oct 31 13:49:42.254983 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 13:49:42.254991 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 13:49:42.255001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 13:49:42.255009 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 13:49:42.255018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 13:49:42.255027 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 13:49:42.255041 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 13:49:42.255050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 13:49:42.255061 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 13:49:42.255069 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 31 13:49:42.255077 systemd[1]: Reached target paths.target - Path Units. Oct 31 13:49:42.255085 systemd[1]: Reached target slices.target - Slice Units. 
Oct 31 13:49:42.255093 systemd[1]: Reached target swap.target - Swaps. Oct 31 13:49:42.255101 systemd[1]: Reached target timers.target - Timer Units. Oct 31 13:49:42.255109 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 13:49:42.255119 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 13:49:42.255129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 13:49:42.255143 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 31 13:49:42.255159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 13:49:42.255168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 13:49:42.255178 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 13:49:42.255186 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 13:49:42.255195 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 13:49:42.255203 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 13:49:42.255212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 13:49:42.255220 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 13:49:42.255230 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 31 13:49:42.255239 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 13:49:42.255247 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 13:49:42.255255 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 13:49:42.255264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 13:49:42.255285 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 13:49:42.255304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 13:49:42.255313 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 13:49:42.255322 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 13:49:42.255391 systemd-journald[345]: Collecting audit messages is disabled. Oct 31 13:49:42.255415 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 13:49:42.255423 kernel: Bridge firewalling registered Oct 31 13:49:42.255432 systemd-journald[345]: Journal started Oct 31 13:49:42.255450 systemd-journald[345]: Runtime Journal (/run/log/journal/7b898b1ce63a41c89514e23802b32e14) is 6M, max 48.5M, 42.4M free. Oct 31 13:49:42.255093 systemd-modules-load[348]: Inserted module 'br_netfilter' Oct 31 13:49:42.259083 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 13:49:42.259894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 13:49:42.263963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 13:49:42.265632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 13:49:42.272726 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 31 13:49:42.276379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 13:49:42.279388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 13:49:42.280098 systemd-tmpfiles[364]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 31 13:49:42.282429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 13:49:42.285213 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 13:49:42.291678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 13:49:42.296020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 13:49:42.300813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 13:49:42.311340 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 13:49:42.313607 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 13:49:42.334097 systemd-resolved[376]: Positive Trust Anchors: Oct 31 13:49:42.334115 systemd-resolved[376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 13:49:42.334119 systemd-resolved[376]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 13:49:42.334149 systemd-resolved[376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 13:49:42.346033 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cc520f2d13274355d865d6b74d46b5152253502842541152122d42de9e5fecb2 Oct 31 13:49:42.358697 systemd-resolved[376]: Defaulting to hostname 'linux'. Oct 31 13:49:42.359619 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 13:49:42.360825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 13:49:42.415298 kernel: Loading iSCSI transport class v2.0-870. Oct 31 13:49:42.423294 kernel: iscsi: registered transport (tcp) Oct 31 13:49:42.436328 kernel: iscsi: registered transport (qla4xxx) Oct 31 13:49:42.436393 kernel: QLogic iSCSI HBA Driver Oct 31 13:49:42.455851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 13:49:42.480189 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 13:49:42.481814 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 13:49:42.526213 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 13:49:42.528601 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Oct 31 13:49:42.530206 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 13:49:42.568153 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 13:49:42.570742 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 13:49:42.598709 systemd-udevd[624]: Using default interface naming scheme 'v257'. Oct 31 13:49:42.606192 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 13:49:42.610768 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 13:49:42.632429 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 13:49:42.633738 dracut-pre-trigger[700]: rd.md=0: removing MD RAID activation Oct 31 13:49:42.636722 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 13:49:42.657270 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 13:49:42.660725 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 13:49:42.674797 systemd-networkd[741]: lo: Link UP Oct 31 13:49:42.674804 systemd-networkd[741]: lo: Gained carrier Oct 31 13:49:42.676456 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 13:49:42.677579 systemd[1]: Reached target network.target - Network. Oct 31 13:49:42.710673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 13:49:42.713355 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 13:49:42.764538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 13:49:42.777731 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 13:49:42.786874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 13:49:42.794952 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 13:49:42.797048 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 13:49:42.797051 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 13:49:42.797718 systemd-networkd[741]: eth0: Link UP Oct 31 13:49:42.797922 systemd-networkd[741]: eth0: Gained carrier Oct 31 13:49:42.797932 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 13:49:42.801499 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 13:49:42.803640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 13:49:42.803742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 13:49:42.805026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 13:49:42.822930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 13:49:42.824337 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 13:49:42.828998 disk-uuid[806]: Primary Header is updated. Oct 31 13:49:42.828998 disk-uuid[806]: Secondary Entries is updated. Oct 31 13:49:42.828998 disk-uuid[806]: Secondary Header is updated. 
Oct 31 13:49:42.842395 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 13:49:42.850792 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 13:49:42.852138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 13:49:42.857510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 13:49:42.861353 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 13:49:42.865128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 13:49:42.886483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 13:49:43.863085 disk-uuid[808]: Warning: The kernel is still using the old partition table. Oct 31 13:49:43.863085 disk-uuid[808]: The new table will be used at the next reboot or after you Oct 31 13:49:43.863085 disk-uuid[808]: run partprobe(8) or kpartx(8) Oct 31 13:49:43.863085 disk-uuid[808]: The operation has completed successfully. Oct 31 13:49:43.871256 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 13:49:43.871415 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 13:49:43.874421 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 13:49:43.908708 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Oct 31 13:49:43.908745 kernel: BTRFS info (device vda6): first mount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1 Oct 31 13:49:43.910229 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 13:49:43.912760 kernel: BTRFS info (device vda6): turning on async discard Oct 31 13:49:43.912779 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 13:49:43.918296 kernel: BTRFS info (device vda6): last unmount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1 Oct 31 13:49:43.918816 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 13:49:43.922952 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 31 13:49:44.016054 ignition[857]: Ignition 2.22.0 Oct 31 13:49:44.016068 ignition[857]: Stage: fetch-offline Oct 31 13:49:44.016114 ignition[857]: no configs at "/usr/lib/ignition/base.d" Oct 31 13:49:44.016129 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 13:49:44.016306 ignition[857]: parsed url from cmdline: "" Oct 31 13:49:44.016317 ignition[857]: no config URL provided Oct 31 13:49:44.016324 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 13:49:44.016333 ignition[857]: no config at "/usr/lib/ignition/user.ign" Oct 31 13:49:44.016373 ignition[857]: op(1): [started] loading QEMU firmware config module Oct 31 13:49:44.016377 ignition[857]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 13:49:44.021501 ignition[857]: op(1): [finished] loading QEMU firmware config module Oct 31 13:49:44.066888 ignition[857]: parsing config with SHA512: bdbaac3718113024c0b743b57a5675eccf1ed8bc4330393c0902013c282423ed7749deff7b50683f84b51a69a7f70eb16d772539ab73478972b78ce6fe38f24e Oct 31 13:49:44.071402 unknown[857]: fetched base config from "system" Oct 31 13:49:44.071416 unknown[857]: fetched user config from "qemu" Oct 31 13:49:44.071754 ignition[857]: fetch-offline: fetch-offline passed Oct 31 13:49:44.071809 ignition[857]: Ignition finished successfully Oct 31 13:49:44.074836 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 13:49:44.076207 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 13:49:44.077014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 13:49:44.111888 ignition[872]: Ignition 2.22.0 Oct 31 13:49:44.111908 ignition[872]: Stage: kargs Oct 31 13:49:44.112049 ignition[872]: no configs at "/usr/lib/ignition/base.d" Oct 31 13:49:44.112057 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 13:49:44.112869 ignition[872]: kargs: kargs passed Oct 31 13:49:44.116897 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 13:49:44.112911 ignition[872]: Ignition finished successfully Oct 31 13:49:44.119246 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 13:49:44.155699 ignition[880]: Ignition 2.22.0 Oct 31 13:49:44.155716 ignition[880]: Stage: disks Oct 31 13:49:44.155861 ignition[880]: no configs at "/usr/lib/ignition/base.d" Oct 31 13:49:44.155869 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 13:49:44.156671 ignition[880]: disks: disks passed Oct 31 13:49:44.158653 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 13:49:44.156713 ignition[880]: Ignition finished successfully Oct 31 13:49:44.160326 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 13:49:44.161718 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 13:49:44.163694 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 13:49:44.165249 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 13:49:44.167300 systemd[1]: Reached target basic.target - Basic System. Oct 31 13:49:44.170150 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 31 13:49:44.212971 systemd-fsck[890]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 31 13:49:44.217112 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 13:49:44.219967 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 13:49:44.276523 systemd-networkd[741]: eth0: Gained IPv6LL Oct 31 13:49:44.283189 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 13:49:44.284811 kernel: EXT4-fs (vda9): mounted filesystem 921f74fb-be87-4ddd-b9ea-687813833434 r/w with ordered data mode. Quota mode: none. Oct 31 13:49:44.284535 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 13:49:44.287085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 13:49:44.288794 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 13:49:44.289843 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 13:49:44.289872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 13:49:44.289910 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 13:49:44.300745 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 13:49:44.303260 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 13:49:44.307854 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898) Oct 31 13:49:44.307877 kernel: BTRFS info (device vda6): first mount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1 Oct 31 13:49:44.307887 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 13:49:44.311823 kernel: BTRFS info (device vda6): turning on async discard Oct 31 13:49:44.311848 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 13:49:44.312880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 13:49:44.342536 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 13:49:44.346821 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory Oct 31 13:49:44.351161 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 13:49:44.355095 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 13:49:44.422520 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 13:49:44.425055 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 13:49:44.426799 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 13:49:44.443481 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 13:49:44.445721 kernel: BTRFS info (device vda6): last unmount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1 Oct 31 13:49:44.457398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 13:49:44.472434 ignition[1012]: INFO : Ignition 2.22.0 Oct 31 13:49:44.472434 ignition[1012]: INFO : Stage: mount Oct 31 13:49:44.474068 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 13:49:44.474068 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 13:49:44.474068 ignition[1012]: INFO : mount: mount passed Oct 31 13:49:44.474068 ignition[1012]: INFO : Ignition finished successfully Oct 31 13:49:44.475638 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Oct 31 13:49:44.478191 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 13:49:45.284955 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 13:49:45.304228 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Oct 31 13:49:45.304309 kernel: BTRFS info (device vda6): first mount of filesystem 6b4c917d-79ca-40fa-acb0-df409d735ae1 Oct 31 13:49:45.304332 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 13:49:45.308018 kernel: BTRFS info (device vda6): turning on async discard Oct 31 13:49:45.308066 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 13:49:45.309357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 13:49:45.342653 ignition[1042]: INFO : Ignition 2.22.0 Oct 31 13:49:45.342653 ignition[1042]: INFO : Stage: files Oct 31 13:49:45.344392 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 13:49:45.344392 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 13:49:45.344392 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Oct 31 13:49:45.348006 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 13:49:45.348006 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 13:49:45.348006 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 13:49:45.348006 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 13:49:45.353701 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 13:49:45.353701 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 31 13:49:45.353701 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 31 13:49:45.348142 unknown[1042]: wrote ssh authorized keys file for user: core Oct 31 13:49:45.406873 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 13:49:45.576054 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 31 13:49:45.576054 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 13:49:45.580228 
ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 13:49:45.580228 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 31 13:49:45.598616 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 31 13:49:45.598616 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 31 13:49:45.598616 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 31 13:49:45.951882 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 13:49:46.230170 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 31 13:49:46.230170 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 31 13:49:46.235878 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 13:49:46.251380 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 13:49:46.254342 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 13:49:46.257264 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 13:49:46.257264 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 31 13:49:46.257264 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 13:49:46.257264 ignition[1042]: INFO : files: createResultFile: createFiles: 
op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 13:49:46.257264 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 13:49:46.257264 ignition[1042]: INFO : files: files passed Oct 31 13:49:46.257264 ignition[1042]: INFO : Ignition finished successfully Oct 31 13:49:46.259116 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 13:49:46.262216 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 13:49:46.265453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 13:49:46.285016 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 13:49:46.285398 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 13:49:46.288587 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory Oct 31 13:49:46.290551 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 13:49:46.290551 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 13:49:46.293949 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 13:49:46.293133 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 13:49:46.295581 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 13:49:46.298249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 13:49:46.335532 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 13:49:46.335648 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 13:49:46.337837 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 31 13:49:46.339794 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 13:49:46.341809 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 13:49:46.342533 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 13:49:46.357260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 13:49:46.359590 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 13:49:46.391686 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 13:49:46.391885 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 13:49:46.394104 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 13:49:46.396264 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 13:49:46.398300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 13:49:46.398415 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 13:49:46.401098 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 13:49:46.403253 systemd[1]: Stopped target basic.target - Basic System. Oct 31 13:49:46.405086 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 13:49:46.406909 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Oct 31 13:49:46.408942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 13:49:46.411007 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 31 13:49:46.413051 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 13:49:46.415035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 13:49:46.417102 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 13:49:46.419186 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 13:49:46.421034 systemd[1]: Stopped target swap.target - Swaps. Oct 31 13:49:46.422642 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 13:49:46.422758 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 13:49:46.425190 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 13:49:46.426420 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 13:49:46.428421 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 13:49:46.429382 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 13:49:46.431598 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 13:49:46.431717 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 13:49:46.434610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 13:49:46.434726 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 13:49:46.436678 systemd[1]: Stopped target paths.target - Path Units. Oct 31 13:49:46.438254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 13:49:46.443369 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 13:49:46.444684 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 13:49:46.446878 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 13:49:46.448506 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 13:49:46.448591 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 13:49:46.450235 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 13:49:46.450335 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 13:49:46.452070 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 13:49:46.452221 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 13:49:46.453974 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 13:49:46.454082 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 13:49:46.456448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 13:49:46.459021 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 13:49:46.460317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 13:49:46.460433 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 13:49:46.462492 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 31 13:49:46.462597 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 13:49:46.464413 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 13:49:46.464514 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 31 13:49:46.469996 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 13:49:46.478753 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 13:49:46.487733 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 13:49:46.493650 ignition[1099]: INFO : Ignition 2.22.0 Oct 31 13:49:46.493650 ignition[1099]: INFO : Stage: umount Oct 31 13:49:46.495842 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 13:49:46.495842 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 13:49:46.495842 ignition[1099]: INFO : umount: umount passed Oct 31 13:49:46.495842 ignition[1099]: INFO : Ignition finished successfully Oct 31 13:49:46.497614 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 13:49:46.497702 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 13:49:46.501663 systemd[1]: Stopped target network.target - Network. Oct 31 13:49:46.503084 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 13:49:46.503156 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 13:49:46.505017 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 13:49:46.505070 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 13:49:46.506853 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 13:49:46.506904 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 13:49:46.508632 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 13:49:46.508677 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 13:49:46.510581 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 13:49:46.512370 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 13:49:46.522889 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 13:49:46.522980 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 13:49:46.524610 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 13:49:46.524696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 13:49:46.526711 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 13:49:46.526805 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 13:49:46.530157 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 13:49:46.530233 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 13:49:46.534000 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 31 13:49:46.535595 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 13:49:46.535627 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 13:49:46.538087 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 13:49:46.539172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 13:49:46.539230 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 13:49:46.541454 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 13:49:46.541500 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 13:49:46.543320 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 13:49:46.543369 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 31 13:49:46.545481 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 13:49:46.557224 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 13:49:46.557401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 13:49:46.559690 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 13:49:46.559722 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 13:49:46.561667 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 13:49:46.561698 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 13:49:46.563479 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 13:49:46.563524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 13:49:46.566215 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 13:49:46.566265 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 13:49:46.569117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 13:49:46.569163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 13:49:46.584816 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 13:49:46.586146 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 31 13:49:46.586207 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 13:49:46.588550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 13:49:46.588595 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 13:49:46.590771 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 31 13:49:46.590815 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 13:49:46.593344 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 13:49:46.593388 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 13:49:46.595515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 13:49:46.595564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 13:49:46.598246 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 13:49:46.598385 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 13:49:46.599812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 13:49:46.599909 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 13:49:46.602749 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 13:49:46.604743 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 13:49:46.622969 systemd[1]: Switching root. Oct 31 13:49:46.653425 systemd-journald[345]: Journal stopped Oct 31 13:49:47.447106 systemd-journald[345]: Received SIGTERM from PID 1 (systemd). 
Oct 31 13:49:47.447156 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 13:49:47.447169 kernel: SELinux: policy capability open_perms=1 Oct 31 13:49:47.447179 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 13:49:47.447190 kernel: SELinux: policy capability always_check_network=0 Oct 31 13:49:47.447204 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 13:49:47.447213 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 13:49:47.447223 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 13:49:47.447232 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 13:49:47.447243 kernel: SELinux: policy capability userspace_initial_context=0 Oct 31 13:49:47.447262 kernel: audit: type=1403 audit(1761918586.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 13:49:47.447315 systemd[1]: Successfully loaded SELinux policy in 59.440ms. Oct 31 13:49:47.447336 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.205ms. Oct 31 13:49:47.447348 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 13:49:47.447359 systemd[1]: Detected virtualization kvm. Oct 31 13:49:47.447369 systemd[1]: Detected architecture arm64. Oct 31 13:49:47.447382 systemd[1]: Detected first boot. Oct 31 13:49:47.447392 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 31 13:49:47.447402 zram_generator::config[1150]: No configuration found. Oct 31 13:49:47.447414 kernel: NET: Registered PF_VSOCK protocol family Oct 31 13:49:47.447424 systemd[1]: Populated /etc with preset unit settings. Oct 31 13:49:47.447434 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 13:49:47.447446 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 31 13:49:47.447456 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 13:49:47.447468 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 13:49:47.447478 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 13:49:47.447488 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 13:49:47.447498 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 13:49:47.447509 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 13:49:47.447520 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 31 13:49:47.447531 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 13:49:47.447541 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 13:49:47.447551 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 13:49:47.447565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 13:49:47.447575 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 13:49:47.447586 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 31 13:49:47.447598 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 13:49:47.447608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 13:49:47.447618 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 31 13:49:47.447628 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 13:49:47.447639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 13:49:47.447650 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 31 13:49:47.447661 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 13:49:47.447672 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 13:49:47.447683 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 13:49:47.447693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 13:49:47.447703 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 13:49:47.447714 systemd[1]: Reached target slices.target - Slice Units. Oct 31 13:49:47.447724 systemd[1]: Reached target swap.target - Swaps. Oct 31 13:49:47.447736 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 13:49:47.447747 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 13:49:47.447759 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 31 13:49:47.447769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 13:49:47.447781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 13:49:47.447791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 13:49:47.447801 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 13:49:47.447812 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 13:49:47.447823 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 13:49:47.447834 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 13:49:47.447844 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 13:49:47.447855 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 13:49:47.447865 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 31 13:49:47.447876 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 13:49:47.447887 systemd[1]: Reached target machines.target - Containers. Oct 31 13:49:47.447898 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 13:49:47.447909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 13:49:47.447920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 13:49:47.447930 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 13:49:47.447940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 13:49:47.447950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 31 13:49:47.447962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 13:49:47.447973 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 13:49:47.447983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 13:49:47.447994 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 13:49:47.448005 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 31 13:49:47.448015 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 31 13:49:47.448025 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 31 13:49:47.448037 systemd[1]: Stopped systemd-fsck-usr.service. Oct 31 13:49:47.448048 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 13:49:47.448058 kernel: fuse: init (API version 7.41) Oct 31 13:49:47.448068 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 13:49:47.448078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 13:49:47.448089 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 13:49:47.448099 kernel: ACPI: bus type drm_connector registered Oct 31 13:49:47.448110 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 13:49:47.448121 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 31 13:49:47.448131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 13:49:47.448141 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 13:49:47.448151 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 13:49:47.448163 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 13:49:47.448174 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 13:49:47.448185 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 13:49:47.448212 systemd-journald[1225]: Collecting audit messages is disabled. Oct 31 13:49:47.448234 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 13:49:47.448247 systemd-journald[1225]: Journal started Oct 31 13:49:47.448268 systemd-journald[1225]: Runtime Journal (/run/log/journal/7b898b1ce63a41c89514e23802b32e14) is 6M, max 48.5M, 42.4M free. Oct 31 13:49:47.205074 systemd[1]: Queued start job for default target multi-user.target. Oct 31 13:49:47.228000 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 31 13:49:47.228483 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 31 13:49:47.451110 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 13:49:47.453328 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 13:49:47.454737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 13:49:47.456262 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 13:49:47.457326 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 13:49:47.458759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 31 13:49:47.458921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 13:49:47.460371 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 13:49:47.460547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 13:49:47.461876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 13:49:47.462051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 13:49:47.463554 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 13:49:47.463717 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 13:49:47.465054 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 13:49:47.465200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 13:49:47.466683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 13:49:47.468186 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 13:49:47.470478 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 31 13:49:47.472147 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 31 13:49:47.484554 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 13:49:47.486251 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 31 13:49:47.488621 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 13:49:47.490661 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 13:49:47.491873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 13:49:47.491914 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 13:49:47.493809 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 31 13:49:47.495482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 13:49:47.503200 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 13:49:47.505443 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 13:49:47.506674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 13:49:47.507613 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 13:49:47.508876 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 13:49:47.510541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 13:49:47.514180 systemd-journald[1225]: Time spent on flushing to /var/log/journal/7b898b1ce63a41c89514e23802b32e14 is 10.884ms for 870 entries. Oct 31 13:49:47.514180 systemd-journald[1225]: System Journal (/var/log/journal/7b898b1ce63a41c89514e23802b32e14) is 8M, max 163.5M, 155.5M free. Oct 31 13:49:47.543452 systemd-journald[1225]: Received client request to flush runtime journal. Oct 31 13:49:47.543504 kernel: loop1: detected capacity change from 0 to 200800 Oct 31 13:49:47.516443 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Oct 31 13:49:47.531087 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 13:49:47.533888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 13:49:47.535990 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 13:49:47.538199 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 13:49:47.541319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 13:49:47.546040 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 13:49:47.549778 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 13:49:47.551494 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Oct 31 13:49:47.551742 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Oct 31 13:49:47.553240 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 31 13:49:47.556475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 13:49:47.558644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 13:49:47.565092 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 13:49:47.573300 kernel: loop2: detected capacity change from 0 to 100192 Oct 31 13:49:47.579081 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 31 13:49:47.590184 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 13:49:47.593009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 13:49:47.595562 kernel: loop3: detected capacity change from 0 to 119400 Oct 31 13:49:47.595450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 13:49:47.616690 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 13:49:47.624138 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Oct 31 13:49:47.624152 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Oct 31 13:49:47.626522 kernel: loop4: detected capacity change from 0 to 200800 Oct 31 13:49:47.628036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 13:49:47.633329 kernel: loop5: detected capacity change from 0 to 100192 Oct 31 13:49:47.638732 kernel: loop6: detected capacity change from 0 to 119400 Oct 31 13:49:47.641634 (sd-merge)[1291]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 31 13:49:47.644169 (sd-merge)[1291]: Merged extensions into '/usr'. Oct 31 13:49:47.648171 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 13:49:47.648193 systemd[1]: Reloading... Oct 31 13:49:47.702740 systemd-resolved[1286]: Positive Trust Anchors: Oct 31 13:49:47.703030 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 13:49:47.703037 systemd-resolved[1286]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 13:49:47.703068 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 13:49:47.706437 zram_generator::config[1328]: No configuration found. Oct 31 13:49:47.709438 systemd-resolved[1286]: Defaulting to hostname 'linux'. Oct 31 13:49:47.836035 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 13:49:47.836355 systemd[1]: Reloading finished in 187 ms. Oct 31 13:49:47.866758 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 31 13:49:47.868175 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 13:49:47.869748 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 13:49:47.872911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 13:49:47.888441 systemd[1]: Starting ensure-sysext.service... Oct 31 13:49:47.890294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 13:49:47.899730 systemd[1]: Reload requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)... Oct 31 13:49:47.899749 systemd[1]: Reloading... Oct 31 13:49:47.907133 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 31 13:49:47.907520 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 31 13:49:47.907809 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 13:49:47.908073 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 13:49:47.908782 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 13:49:47.909060 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Oct 31 13:49:47.909165 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Oct 31 13:49:47.912511 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 13:49:47.912613 systemd-tmpfiles[1359]: Skipping /boot Oct 31 13:49:47.918574 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 13:49:47.918667 systemd-tmpfiles[1359]: Skipping /boot Oct 31 13:49:47.942307 zram_generator::config[1388]: No configuration found. Oct 31 13:49:48.072382 systemd[1]: Reloading finished in 172 ms. Oct 31 13:49:48.095833 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 13:49:48.114029 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 13:49:48.121567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 13:49:48.123768 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Oct 31 13:49:48.136039 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 31 13:49:48.140029 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 13:49:48.142566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 13:49:48.146698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 13:49:48.151564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 13:49:48.159928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 13:49:48.162223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 13:49:48.164557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 13:49:48.166017 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 13:49:48.166150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 13:49:48.169331 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 13:49:48.176065 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 31 13:49:48.179140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 13:49:48.179335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 13:49:48.181215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 13:49:48.181399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 13:49:48.191098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 13:49:48.192446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 13:49:48.194505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 13:49:48.195349 systemd-udevd[1430]: Using default interface naming scheme 'v257'. Oct 31 13:49:48.197103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 13:49:48.199526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 13:49:48.199646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 31 13:49:48.200467 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 13:49:48.210249 systemd[1]: Finished ensure-sysext.service. Oct 31 13:49:48.212138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 13:49:48.212617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 13:49:48.214525 augenrules[1461]: No rules Oct 31 13:49:48.216042 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 13:49:48.217362 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 13:49:48.218703 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 31 13:49:48.218981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 13:49:48.220366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 13:49:48.222598 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 13:49:48.222739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 13:49:48.224202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 13:49:48.224439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 13:49:48.232064 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 13:49:48.233332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 13:49:48.233398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 13:49:48.235131 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 13:49:48.236541 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 13:49:48.314845 systemd-networkd[1491]: lo: Link UP Oct 31 13:49:48.314853 systemd-networkd[1491]: lo: Gained carrier Oct 31 13:49:48.316019 systemd-networkd[1491]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 13:49:48.316029 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 13:49:48.316136 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 13:49:48.316532 systemd-networkd[1491]: eth0: Link UP Oct 31 13:49:48.316656 systemd-networkd[1491]: eth0: Gained carrier Oct 31 13:49:48.316670 systemd-networkd[1491]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 13:49:48.318395 systemd[1]: Reached target network.target - Network. Oct 31 13:49:48.322057 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 31 13:49:48.324501 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 13:49:48.326525 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 13:49:48.330050 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 31 13:49:48.330150 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 13:49:48.333356 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 13:49:48.334030 systemd-timesyncd[1492]: Network configuration changed, trying to establish connection. Oct 31 13:49:48.336743 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 31 13:49:48.336805 systemd-timesyncd[1492]: Initial clock synchronization to Fri 2025-10-31 13:49:48.659852 UTC. Oct 31 13:49:48.345823 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 31 13:49:48.358213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 31 13:49:48.360725 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 13:49:48.378644 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 13:49:48.426298 ldconfig[1427]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 13:49:48.431128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 13:49:48.439481 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 13:49:48.441750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 13:49:48.458821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 13:49:48.486428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 13:49:48.488901 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 13:49:48.490117 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 13:49:48.491422 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 13:49:48.492794 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 13:49:48.493961 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 13:49:48.495261 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 31 13:49:48.496503 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 13:49:48.496538 systemd[1]: Reached target paths.target - Path Units. Oct 31 13:49:48.497563 systemd[1]: Reached target timers.target - Timer Units. Oct 31 13:49:48.499239 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 13:49:48.501547 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 13:49:48.504145 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 31 13:49:48.505611 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 31 13:49:48.506871 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 31 13:49:48.509873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 13:49:48.511197 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 31 13:49:48.512925 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 13:49:48.514116 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 13:49:48.515111 systemd[1]: Reached target basic.target - Basic System. Oct 31 13:49:48.516134 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 13:49:48.516164 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 13:49:48.516984 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 13:49:48.518995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 13:49:48.520815 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 13:49:48.522841 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Oct 31 13:49:48.525516 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 13:49:48.526524 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 13:49:48.527445 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 13:49:48.530389 jq[1540]: false Oct 31 13:49:48.530397 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 13:49:48.532161 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 13:49:48.535245 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 13:49:48.536898 extend-filesystems[1541]: Found /dev/vda6 Oct 31 13:49:48.539267 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 13:49:48.540350 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 13:49:48.540708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 13:49:48.541191 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 13:49:48.543800 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 13:49:48.546298 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 13:49:48.549331 extend-filesystems[1541]: Found /dev/vda9 Oct 31 13:49:48.549592 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 13:49:48.549742 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 13:49:48.549951 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 13:49:48.550131 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 13:49:48.550526 extend-filesystems[1541]: Checking size of /dev/vda9 Oct 31 13:49:48.552676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 13:49:48.552834 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 13:49:48.558397 jq[1559]: true Oct 31 13:49:48.569976 tar[1564]: linux-arm64/LICENSE Oct 31 13:49:48.570813 extend-filesystems[1541]: Resized partition /dev/vda9 Oct 31 13:49:48.572411 update_engine[1553]: I20251031 13:49:48.570719 1553 main.cc:92] Flatcar Update Engine starting Oct 31 13:49:48.572609 tar[1564]: linux-arm64/helm Oct 31 13:49:48.581686 extend-filesystems[1579]: resize2fs 1.47.3 (8-Jul-2025) Oct 31 13:49:48.584964 jq[1570]: true Oct 31 13:49:48.590303 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 31 13:49:48.593937 dbus-daemon[1538]: [system] SELinux support is enabled Oct 31 13:49:48.596765 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 13:49:48.599525 update_engine[1553]: I20251031 13:49:48.599360 1553 update_check_scheduler.cc:74] Next update check in 4m17s Oct 31 13:49:48.601709 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 13:49:48.601737 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 31 13:49:48.604032 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 13:49:48.604053 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 13:49:48.605802 systemd[1]: Started update-engine.service - Update Engine. Oct 31 13:49:48.612138 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 13:49:48.640437 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 31 13:49:48.659853 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 13:49:48.659853 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 31 13:49:48.659853 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 31 13:49:48.666133 extend-filesystems[1541]: Resized filesystem in /dev/vda9 Oct 31 13:49:48.667067 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Oct 31 13:49:48.661096 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 13:49:48.661365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 13:49:48.665784 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 13:49:48.671549 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 13:49:48.672144 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 13:49:48.674928 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (Power Button) Oct 31 13:49:48.675123 systemd-logind[1551]: New seat seat0. Oct 31 13:49:48.675687 systemd[1]: Started systemd-logind.service - User Login Management. 
Oct 31 13:49:48.743497 containerd[1578]: time="2025-10-31T13:49:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 31 13:49:48.747285 containerd[1578]: time="2025-10-31T13:49:48.745648680Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 31 13:49:48.764098 containerd[1578]: time="2025-10-31T13:49:48.764052080Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.88µs" Oct 31 13:49:48.764098 containerd[1578]: time="2025-10-31T13:49:48.764088440Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 31 13:49:48.764171 containerd[1578]: time="2025-10-31T13:49:48.764106520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 31 13:49:48.764261 containerd[1578]: time="2025-10-31T13:49:48.764237640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 31 13:49:48.764261 containerd[1578]: time="2025-10-31T13:49:48.764258400Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 31 13:49:48.764328 containerd[1578]: time="2025-10-31T13:49:48.764305480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765228 containerd[1578]: time="2025-10-31T13:49:48.765162080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765258 containerd[1578]: time="2025-10-31T13:49:48.765227240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765721 containerd[1578]: time="2025-10-31T13:49:48.765690480Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765721 containerd[1578]: time="2025-10-31T13:49:48.765717120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765767 containerd[1578]: time="2025-10-31T13:49:48.765731800Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765767 containerd[1578]: time="2025-10-31T13:49:48.765739880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 31 13:49:48.765849 containerd[1578]: time="2025-10-31T13:49:48.765829480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 31 13:49:48.766677 containerd[1578]: time="2025-10-31T13:49:48.766585840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 13:49:48.766715 containerd[1578]: time="2025-10-31T13:49:48.766692280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Oct 31 13:49:48.766715 containerd[1578]: time="2025-10-31T13:49:48.766704720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 31 13:49:48.766761 containerd[1578]: time="2025-10-31T13:49:48.766742000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 31 13:49:48.767116 containerd[1578]: time="2025-10-31T13:49:48.767044040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 31 13:49:48.767189 containerd[1578]: time="2025-10-31T13:49:48.767169320Z" level=info msg="metadata content store policy set" policy=shared Oct 31 13:49:48.770673 containerd[1578]: time="2025-10-31T13:49:48.770637360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 31 13:49:48.770758 containerd[1578]: time="2025-10-31T13:49:48.770736440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 31 13:49:48.770800 containerd[1578]: time="2025-10-31T13:49:48.770760800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 31 13:49:48.770819 containerd[1578]: time="2025-10-31T13:49:48.770775440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 31 13:49:48.770837 containerd[1578]: time="2025-10-31T13:49:48.770818600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 31 13:49:48.770837 containerd[1578]: time="2025-10-31T13:49:48.770832600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 31 13:49:48.770868 containerd[1578]: time="2025-10-31T13:49:48.770845920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 31 13:49:48.770868 containerd[1578]: time="2025-10-31T13:49:48.770862280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 31 13:49:48.770901 containerd[1578]: time="2025-10-31T13:49:48.770872760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 31 13:49:48.770901 containerd[1578]: time="2025-10-31T13:49:48.770882920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 31 13:49:48.770901 containerd[1578]: time="2025-10-31T13:49:48.770891160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 31 13:49:48.770950 containerd[1578]: time="2025-10-31T13:49:48.770902280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 31 13:49:48.771029 containerd[1578]: time="2025-10-31T13:49:48.771006600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 31 13:49:48.771057 containerd[1578]: time="2025-10-31T13:49:48.771034120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 31 13:49:48.771081 containerd[1578]: time="2025-10-31T13:49:48.771055000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 31 13:49:48.771081 containerd[1578]: time="2025-10-31T13:49:48.771067720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Oct 31 13:49:48.771081 containerd[1578]: time="2025-10-31T13:49:48.771078480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 31 13:49:48.771127 containerd[1578]: time="2025-10-31T13:49:48.771088400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 31 13:49:48.771127 containerd[1578]: time="2025-10-31T13:49:48.771099480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 31 13:49:48.771127 containerd[1578]: time="2025-10-31T13:49:48.771110200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 31 13:49:48.771127 containerd[1578]: time="2025-10-31T13:49:48.771123800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 31 13:49:48.771194 containerd[1578]: time="2025-10-31T13:49:48.771135560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 31 13:49:48.771194 containerd[1578]: time="2025-10-31T13:49:48.771145880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 31 13:49:48.771531 containerd[1578]: time="2025-10-31T13:49:48.771497000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 31 13:49:48.771574 containerd[1578]: time="2025-10-31T13:49:48.771533240Z" level=info msg="Start snapshots syncer" Oct 31 13:49:48.771623 containerd[1578]: time="2025-10-31T13:49:48.771602480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 31 13:49:48.771922 containerd[1578]: time="2025-10-31T13:49:48.771876840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 31 13:49:48.772014 containerd[1578]: time="2025-10-31T13:49:48.771931840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 31 13:49:48.772068 containerd[1578]: time="2025-10-31T13:49:48.772044440Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 31 13:49:48.772375 containerd[1578]: time="2025-10-31T13:49:48.772350200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 31 13:49:48.772420 containerd[1578]: time="2025-10-31T13:49:48.772395880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 31 13:49:48.772420 containerd[1578]: time="2025-10-31T13:49:48.772408520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 31 13:49:48.772454 containerd[1578]: time="2025-10-31T13:49:48.772419320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 31 13:49:48.772454 containerd[1578]: time="2025-10-31T13:49:48.772430800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 31 13:49:48.772454 containerd[1578]: time="2025-10-31T13:49:48.772440760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 31 13:49:48.772454 containerd[1578]: time="2025-10-31T13:49:48.772451000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 31 13:49:48.772523 containerd[1578]: time="2025-10-31T13:49:48.772479400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 31 13:49:48.772523 containerd[1578]: 
time="2025-10-31T13:49:48.772491160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 31 13:49:48.772523 containerd[1578]: time="2025-10-31T13:49:48.772508400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 31 13:49:48.772570 containerd[1578]: time="2025-10-31T13:49:48.772537960Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 13:49:48.772570 containerd[1578]: time="2025-10-31T13:49:48.772551000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 13:49:48.772570 containerd[1578]: time="2025-10-31T13:49:48.772558880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 13:49:48.772570 containerd[1578]: time="2025-10-31T13:49:48.772567560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 13:49:48.772635 containerd[1578]: time="2025-10-31T13:49:48.772575280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 31 13:49:48.772635 containerd[1578]: time="2025-10-31T13:49:48.772584960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 31 13:49:48.772635 containerd[1578]: time="2025-10-31T13:49:48.772594280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 31 13:49:48.772682 containerd[1578]: time="2025-10-31T13:49:48.772671480Z" level=info msg="runtime interface created" Oct 31 13:49:48.772682 containerd[1578]: time="2025-10-31T13:49:48.772677120Z" level=info msg="created NRI interface" Oct 31 13:49:48.772714 containerd[1578]: time="2025-10-31T13:49:48.772685480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 31 13:49:48.772714 containerd[1578]: time="2025-10-31T13:49:48.772696600Z" level=info msg="Connect containerd service" Oct 31 13:49:48.772748 containerd[1578]: time="2025-10-31T13:49:48.772724800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 13:49:48.773844 containerd[1578]: time="2025-10-31T13:49:48.773814360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 13:49:48.786164 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 13:49:48.807911 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 13:49:48.810763 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 13:49:48.829766 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 13:49:48.830012 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 13:49:48.833038 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Oct 31 13:49:48.841834 containerd[1578]: time="2025-10-31T13:49:48.841769480Z" level=info msg="Start subscribing containerd event" Oct 31 13:49:48.841916 containerd[1578]: time="2025-10-31T13:49:48.841857040Z" level=info msg="Start recovering state" Oct 31 13:49:48.841935 containerd[1578]: time="2025-10-31T13:49:48.841903960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 13:49:48.841953 containerd[1578]: time="2025-10-31T13:49:48.841946160Z" level=info msg="Start event monitor" Oct 31 13:49:48.841971 containerd[1578]: time="2025-10-31T13:49:48.841958280Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 13:49:48.841971 containerd[1578]: time="2025-10-31T13:49:48.841959440Z" level=info msg="Start cni network conf syncer for default" Oct 31 13:49:48.842003 containerd[1578]: time="2025-10-31T13:49:48.841974360Z" level=info msg="Start streaming server" Oct 31 13:49:48.842076 containerd[1578]: time="2025-10-31T13:49:48.842058280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 31 13:49:48.842076 containerd[1578]: time="2025-10-31T13:49:48.842070680Z" level=info msg="runtime interface starting up..." Oct 31 13:49:48.842120 containerd[1578]: time="2025-10-31T13:49:48.842077760Z" level=info msg="starting plugins..." Oct 31 13:49:48.842120 containerd[1578]: time="2025-10-31T13:49:48.842091800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 31 13:49:48.842349 containerd[1578]: time="2025-10-31T13:49:48.842326800Z" level=info msg="containerd successfully booted in 0.099180s" Oct 31 13:49:48.842412 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 13:49:48.851311 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 13:49:48.854001 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 13:49:48.856060 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 31 13:49:48.857513 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 13:49:48.927174 tar[1564]: linux-arm64/README.md Oct 31 13:49:48.955643 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 13:49:49.973755 systemd-networkd[1491]: eth0: Gained IPv6LL Oct 31 13:49:49.975992 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 13:49:49.977799 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 13:49:49.981645 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 31 13:49:49.983925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:49:49.986027 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 13:49:50.016173 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 13:49:50.016608 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 31 13:49:50.018658 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 13:49:50.022222 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 13:49:50.544369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:49:50.546054 systemd[1]: Reached target multi-user.target - Multi-User System. 
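Note: containerd reports "containerd successfully booted" and is serving gRPC on /run/containerd/containerd.sock plus a ttrpc socket alongside it. Assuming ctr and crictl are on the path (both ship with typical Kubernetes node images), a quick way to confirm the runtime answers on that socket:

    $ ctr --address /run/containerd/containerd.sock version
    $ crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head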
Oct 31 13:49:50.547966 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 13:49:50.548209 systemd[1]: Startup finished in 1.422s (kernel) + 4.828s (initrd) + 3.763s (userspace) = 10.014s. Oct 31 13:49:50.859081 kubelet[1677]: E1031 13:49:50.858963 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 13:49:50.861534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 13:49:50.861686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 13:49:50.862032 systemd[1]: kubelet.service: Consumed 677ms CPU time, 248M memory peak. Oct 31 13:49:53.365234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 13:49:53.366442 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:37782.service - OpenSSH per-connection server daemon (10.0.0.1:37782). Oct 31 13:49:53.475138 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 37782 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:53.476781 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:53.484535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 13:49:53.485407 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 13:49:53.491609 systemd-logind[1551]: New session 1 of user core. Oct 31 13:49:53.503599 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 13:49:53.506738 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 13:49:53.520130 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 13:49:53.522265 systemd-logind[1551]: New session c1 of user core. Oct 31 13:49:53.607987 systemd[1695]: Queued start job for default target default.target. Oct 31 13:49:53.619552 systemd[1695]: Created slice app.slice - User Application Slice. Oct 31 13:49:53.619580 systemd[1695]: Reached target paths.target - Paths. Oct 31 13:49:53.619612 systemd[1695]: Reached target timers.target - Timers. Oct 31 13:49:53.620632 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 13:49:53.629365 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 13:49:53.629418 systemd[1695]: Reached target sockets.target - Sockets. Oct 31 13:49:53.629453 systemd[1695]: Reached target basic.target - Basic System. Oct 31 13:49:53.629491 systemd[1695]: Reached target default.target - Main User Target. Oct 31 13:49:53.629513 systemd[1695]: Startup finished in 101ms. Oct 31 13:49:53.630074 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 13:49:53.631960 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 13:49:53.645336 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:37790.service - OpenSSH per-connection server daemon (10.0.0.1:37790). 
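Note: the kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is only written by `kubeadm init` or `kubeadm join`, so this failure (and the restart attempts later in the log) is expected until the node is bootstrapped. To see the same picture interactively:

    $ systemctl status kubelet --no-pager
    $ journalctl -u kubelet -n 20 --no-pager
    $ ls -l /var/lib/kubelet/config.yaml    # absent until kubeadm writes it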
Oct 31 13:49:53.707391 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 37790 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:53.708557 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:53.713049 systemd-logind[1551]: New session 2 of user core. Oct 31 13:49:53.723551 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 13:49:53.738554 sshd[1709]: Connection closed by 10.0.0.1 port 37790 Oct 31 13:49:53.738491 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Oct 31 13:49:53.760427 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:37790.service: Deactivated successfully. Oct 31 13:49:53.763667 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 13:49:53.764714 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit. Oct 31 13:49:53.766754 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:37796.service - OpenSSH per-connection server daemon (10.0.0.1:37796). Oct 31 13:49:53.767282 systemd-logind[1551]: Removed session 2. Oct 31 13:49:53.830214 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 37796 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:53.832694 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:53.839215 systemd-logind[1551]: New session 3 of user core. Oct 31 13:49:53.851460 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 13:49:53.859943 sshd[1718]: Connection closed by 10.0.0.1 port 37796 Oct 31 13:49:53.859844 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Oct 31 13:49:53.872120 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:37796.service: Deactivated successfully. Oct 31 13:49:53.875098 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 13:49:53.876583 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit. Oct 31 13:49:53.878208 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:37808.service - OpenSSH per-connection server daemon (10.0.0.1:37808). Oct 31 13:49:53.879192 systemd-logind[1551]: Removed session 3. Oct 31 13:49:53.951150 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 37808 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:53.954814 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:53.959324 systemd-logind[1551]: New session 4 of user core. Oct 31 13:49:53.974433 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 13:49:53.985280 sshd[1727]: Connection closed by 10.0.0.1 port 37808 Oct 31 13:49:53.985914 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Oct 31 13:49:53.998415 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:37808.service: Deactivated successfully. Oct 31 13:49:54.003571 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 13:49:54.004392 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Oct 31 13:49:54.010541 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:37812.service - OpenSSH per-connection server daemon (10.0.0.1:37812). Oct 31 13:49:54.011217 systemd-logind[1551]: Removed session 4. 
Oct 31 13:49:54.071030 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:54.075067 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:54.079360 systemd-logind[1551]: New session 5 of user core. Oct 31 13:49:54.089453 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 13:49:54.108038 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 13:49:54.108314 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:49:54.121225 sudo[1737]: pam_unix(sudo:session): session closed for user root Oct 31 13:49:54.123034 sshd[1736]: Connection closed by 10.0.0.1 port 37812 Oct 31 13:49:54.123578 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Oct 31 13:49:54.137395 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:37812.service: Deactivated successfully. Oct 31 13:49:54.142404 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 13:49:54.145147 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Oct 31 13:49:54.148178 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:37828.service - OpenSSH per-connection server daemon (10.0.0.1:37828). Oct 31 13:49:54.148792 systemd-logind[1551]: Removed session 5. Oct 31 13:49:54.210374 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 37828 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:54.211233 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:54.216234 systemd-logind[1551]: New session 6 of user core. Oct 31 13:49:54.227479 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 13:49:54.239944 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 13:49:54.240444 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:49:54.286411 sudo[1748]: pam_unix(sudo:session): session closed for user root Oct 31 13:49:54.292475 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 13:49:54.296018 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:49:54.306568 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 13:49:54.348961 augenrules[1770]: No rules Oct 31 13:49:54.349990 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 13:49:54.350319 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 13:49:54.352197 sudo[1747]: pam_unix(sudo:session): session closed for user root Oct 31 13:49:54.353767 sshd[1746]: Connection closed by 10.0.0.1 port 37828 Oct 31 13:49:54.354445 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Oct 31 13:49:54.366021 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:37828.service: Deactivated successfully. Oct 31 13:49:54.370713 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 13:49:54.371630 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit. Oct 31 13:49:54.373742 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:37842.service - OpenSSH per-connection server daemon (10.0.0.1:37842). Oct 31 13:49:54.378079 systemd-logind[1551]: Removed session 6. 
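Note: the install session deletes the default audit rule files and restarts audit-rules, after which augenrules reports "No rules". To confirm that the kernel's loaded ruleset really is empty (shown as a hedged example; auditctl and augenrules come with the audit package):

    $ sudo augenrules --check
    $ sudo auditctl -l      # prints "No rules" when nothing is loaded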
Oct 31 13:49:54.438108 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 37842 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:49:54.436039 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:49:54.446566 systemd-logind[1551]: New session 7 of user core. Oct 31 13:49:54.451438 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 13:49:54.462566 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 13:49:54.462896 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 13:49:54.744528 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 13:49:54.764566 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 13:49:54.965327 dockerd[1803]: time="2025-10-31T13:49:54.965135758Z" level=info msg="Starting up" Oct 31 13:49:54.966366 dockerd[1803]: time="2025-10-31T13:49:54.966343396Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 13:49:54.976241 dockerd[1803]: time="2025-10-31T13:49:54.976211590Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 13:49:55.108785 dockerd[1803]: time="2025-10-31T13:49:55.108579166Z" level=info msg="Loading containers: start." Oct 31 13:49:55.118351 kernel: Initializing XFRM netlink socket Oct 31 13:49:55.296523 systemd-networkd[1491]: docker0: Link UP Oct 31 13:49:55.299851 dockerd[1803]: time="2025-10-31T13:49:55.299819078Z" level=info msg="Loading containers: done." Oct 31 13:49:55.311025 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1846372560-merged.mount: Deactivated successfully. Oct 31 13:49:55.312620 dockerd[1803]: time="2025-10-31T13:49:55.312579138Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 13:49:55.312679 dockerd[1803]: time="2025-10-31T13:49:55.312660263Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 13:49:55.312819 dockerd[1803]: time="2025-10-31T13:49:55.312797182Z" level=info msg="Initializing buildkit" Oct 31 13:49:55.331749 dockerd[1803]: time="2025-10-31T13:49:55.331714830Z" level=info msg="Completed buildkit initialization" Oct 31 13:49:55.337691 dockerd[1803]: time="2025-10-31T13:49:55.337655515Z" level=info msg="Daemon has completed initialization" Oct 31 13:49:55.337942 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 13:49:55.338252 dockerd[1803]: time="2025-10-31T13:49:55.337726825Z" level=info msg="API listen on /run/docker.sock" Oct 31 13:49:55.770539 containerd[1578]: time="2025-10-31T13:49:55.770439266Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 31 13:49:56.337267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495298863.mount: Deactivated successfully. 
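Note: dockerd starts with the overlay2 storage driver and buildkit, talking to its bundled containerd over /var/run/docker/libcontainerd/docker-containerd.sock; per the message itself, the overlay2 warning only concerns diff performance when building images. A hedged sanity check of the running daemon:

    $ systemctl is-active docker
    $ docker info --format '{{.ServerVersion}} driver={{.Driver}}'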
Oct 31 13:49:57.324307 containerd[1578]: time="2025-10-31T13:49:57.324105410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:57.325288 containerd[1578]: time="2025-10-31T13:49:57.325241730Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 31 13:49:57.326769 containerd[1578]: time="2025-10-31T13:49:57.326725730Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:57.333214 containerd[1578]: time="2025-10-31T13:49:57.332688378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:57.334578 containerd[1578]: time="2025-10-31T13:49:57.334548204Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.564074029s" Oct 31 13:49:57.334678 containerd[1578]: time="2025-10-31T13:49:57.334663219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 31 13:49:57.335239 containerd[1578]: time="2025-10-31T13:49:57.335219963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 31 13:49:58.384048 containerd[1578]: time="2025-10-31T13:49:58.383992540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:58.385572 containerd[1578]: time="2025-10-31T13:49:58.385545377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 31 13:49:58.386426 containerd[1578]: time="2025-10-31T13:49:58.386372302Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:58.388823 containerd[1578]: time="2025-10-31T13:49:58.388795789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:58.389803 containerd[1578]: time="2025-10-31T13:49:58.389779677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.054358575s" Oct 31 13:49:58.389895 containerd[1578]: time="2025-10-31T13:49:58.389872107Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 31 13:49:58.390578 
containerd[1578]: time="2025-10-31T13:49:58.390444537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 31 13:49:59.242876 containerd[1578]: time="2025-10-31T13:49:59.242822076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:59.244115 containerd[1578]: time="2025-10-31T13:49:59.243882509Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 31 13:49:59.244856 containerd[1578]: time="2025-10-31T13:49:59.244824901Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:59.247647 containerd[1578]: time="2025-10-31T13:49:59.247586708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:49:59.249110 containerd[1578]: time="2025-10-31T13:49:59.248988491Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 858.51405ms" Oct 31 13:49:59.249110 containerd[1578]: time="2025-10-31T13:49:59.249023338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 31 13:49:59.249517 containerd[1578]: time="2025-10-31T13:49:59.249494332Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 31 13:50:00.339537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199970066.mount: Deactivated successfully. 
Oct 31 13:50:00.493914 containerd[1578]: time="2025-10-31T13:50:00.493845590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:00.494405 containerd[1578]: time="2025-10-31T13:50:00.494362069Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 31 13:50:00.495316 containerd[1578]: time="2025-10-31T13:50:00.495234186Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:00.497113 containerd[1578]: time="2025-10-31T13:50:00.497062554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:00.497594 containerd[1578]: time="2025-10-31T13:50:00.497569264Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.248033589s" Oct 31 13:50:00.497626 containerd[1578]: time="2025-10-31T13:50:00.497600108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 31 13:50:00.498060 containerd[1578]: time="2025-10-31T13:50:00.498040405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 31 13:50:00.987923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 13:50:00.989455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:50:00.994821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677909993.mount: Deactivated successfully. Oct 31 13:50:01.122713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:01.127115 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 13:50:01.168037 kubelet[2118]: E1031 13:50:01.167978 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 13:50:01.171802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 13:50:01.171933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 13:50:01.172212 systemd[1]: kubelet.service: Consumed 143ms CPU time, 106.7M memory peak. 
Oct 31 13:50:02.024617 containerd[1578]: time="2025-10-31T13:50:02.024562027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:02.026472 containerd[1578]: time="2025-10-31T13:50:02.026417345Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 31 13:50:02.029130 containerd[1578]: time="2025-10-31T13:50:02.028201802Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:02.031723 containerd[1578]: time="2025-10-31T13:50:02.031671978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:02.033116 containerd[1578]: time="2025-10-31T13:50:02.033076025Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.535011049s" Oct 31 13:50:02.033160 containerd[1578]: time="2025-10-31T13:50:02.033114618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 31 13:50:02.033748 containerd[1578]: time="2025-10-31T13:50:02.033710713Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 31 13:50:02.483340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3926432641.mount: Deactivated successfully. 
Oct 31 13:50:02.490361 containerd[1578]: time="2025-10-31T13:50:02.490313560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:02.490762 containerd[1578]: time="2025-10-31T13:50:02.490720638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 31 13:50:02.491650 containerd[1578]: time="2025-10-31T13:50:02.491613269Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:02.493846 containerd[1578]: time="2025-10-31T13:50:02.493812096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:02.495098 containerd[1578]: time="2025-10-31T13:50:02.495060684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 461.318469ms" Oct 31 13:50:02.495133 containerd[1578]: time="2025-10-31T13:50:02.495093758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 31 13:50:02.495628 containerd[1578]: time="2025-10-31T13:50:02.495597439Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 31 13:50:05.583172 containerd[1578]: time="2025-10-31T13:50:05.583102271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:05.584720 containerd[1578]: time="2025-10-31T13:50:05.584689407Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 31 13:50:05.585528 containerd[1578]: time="2025-10-31T13:50:05.585485466Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:05.599098 containerd[1578]: time="2025-10-31T13:50:05.598381373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:05.599331 containerd[1578]: time="2025-10-31T13:50:05.599305681Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.103637616s" Oct 31 13:50:05.599396 containerd[1578]: time="2025-10-31T13:50:05.599383290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 31 13:50:10.811548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:10.811685 systemd[1]: kubelet.service: Consumed 143ms CPU time, 106.7M memory peak. 
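Note: between 13:49:55 and 13:50:05 containerd pulls the standard kubeadm control-plane image set for v1.34.1 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns v1.12.1, pause 3.10.1 and etcd 3.6.4-0). To list or pre-pull the same set out of band (the version flag here is chosen to match the tags seen in this log):

    $ kubeadm config images list --kubernetes-version v1.34.1
    $ crictl images | grep registry.k8s.io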
Oct 31 13:50:10.813474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:50:10.834541 systemd[1]: Reload requested from client PID 2244 ('systemctl') (unit session-7.scope)... Oct 31 13:50:10.834559 systemd[1]: Reloading... Oct 31 13:50:10.900325 zram_generator::config[2291]: No configuration found. Oct 31 13:50:11.177405 systemd[1]: Reloading finished in 342 ms. Oct 31 13:50:11.228952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:11.231206 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:50:11.233139 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 13:50:11.233494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:11.234369 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.2M memory peak. Oct 31 13:50:11.235637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:50:11.355192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:11.359426 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 13:50:11.390519 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 13:50:11.390519 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 13:50:11.391062 kubelet[2335]: I1031 13:50:11.391020 2335 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 13:50:11.942910 kubelet[2335]: I1031 13:50:11.942865 2335 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 13:50:11.942910 kubelet[2335]: I1031 13:50:11.942896 2335 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 13:50:11.944020 kubelet[2335]: I1031 13:50:11.943992 2335 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 13:50:11.944020 kubelet[2335]: I1031 13:50:11.944014 2335 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 13:50:11.944322 kubelet[2335]: I1031 13:50:11.944305 2335 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 13:50:12.032335 kubelet[2335]: E1031 13:50:12.032297 2335 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 13:50:12.032586 kubelet[2335]: I1031 13:50:12.032493 2335 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 13:50:12.035466 kubelet[2335]: I1031 13:50:12.035448 2335 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 13:50:12.037934 kubelet[2335]: I1031 13:50:12.037916 2335 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 13:50:12.038149 kubelet[2335]: I1031 13:50:12.038125 2335 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 13:50:12.038319 kubelet[2335]: I1031 13:50:12.038151 2335 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 13:50:12.038416 kubelet[2335]: I1031 13:50:12.038323 2335 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 13:50:12.038416 kubelet[2335]: I1031 13:50:12.038333 2335 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 13:50:12.038470 kubelet[2335]: I1031 13:50:12.038440 2335 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 13:50:12.040796 kubelet[2335]: I1031 13:50:12.040753 2335 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:50:12.041896 kubelet[2335]: I1031 13:50:12.041867 2335 kubelet.go:475] "Attempting to sync node with API server" Oct 31 13:50:12.041932 kubelet[2335]: I1031 13:50:12.041898 2335 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 13:50:12.042495 kubelet[2335]: I1031 13:50:12.042394 2335 kubelet.go:387] "Adding apiserver pod source" Oct 31 13:50:12.042495 kubelet[2335]: I1031 13:50:12.042416 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 13:50:12.042495 kubelet[2335]: E1031 13:50:12.042457 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 13:50:12.042991 kubelet[2335]: E1031 13:50:12.042964 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 13:50:12.043513 kubelet[2335]: I1031 13:50:12.043498 2335 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 13:50:12.044237 kubelet[2335]: I1031 13:50:12.044215 2335 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 13:50:12.044341 kubelet[2335]: I1031 13:50:12.044329 2335 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 13:50:12.044435 kubelet[2335]: W1031 13:50:12.044424 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 13:50:12.049657 kubelet[2335]: I1031 13:50:12.049630 2335 server.go:1262] "Started kubelet" Oct 31 13:50:12.049898 kubelet[2335]: I1031 13:50:12.049875 2335 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 13:50:12.050101 kubelet[2335]: I1031 13:50:12.050066 2335 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 13:50:12.050247 kubelet[2335]: I1031 13:50:12.050229 2335 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 13:50:12.050454 kubelet[2335]: I1031 13:50:12.050442 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 13:50:12.050785 kubelet[2335]: I1031 13:50:12.050759 2335 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 13:50:12.051266 kubelet[2335]: I1031 13:50:12.051248 2335 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 13:50:12.051396 kubelet[2335]: E1031 13:50:12.051377 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:12.051963 kubelet[2335]: I1031 13:50:12.051944 2335 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 13:50:12.052002 kubelet[2335]: I1031 13:50:12.051972 2335 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 13:50:12.052050 kubelet[2335]: I1031 13:50:12.052035 2335 reconciler.go:29] "Reconciler: start to sync state" Oct 31 13:50:12.054392 kubelet[2335]: E1031 13:50:12.054342 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Oct 31 13:50:12.055017 kubelet[2335]: E1031 13:50:12.054474 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 13:50:12.055131 kubelet[2335]: I1031 13:50:12.055080 2335 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 13:50:12.055515 
kubelet[2335]: I1031 13:50:12.055495 2335 server.go:310] "Adding debug handlers to kubelet server" Oct 31 13:50:12.055950 kubelet[2335]: E1031 13:50:12.054849 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187397a4825d92ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 13:50:12.049597164 +0000 UTC m=+0.687140320,LastTimestamp:2025-10-31 13:50:12.049597164 +0000 UTC m=+0.687140320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 13:50:12.056897 kubelet[2335]: I1031 13:50:12.056872 2335 factory.go:223] Registration of the containerd container factory successfully Oct 31 13:50:12.056897 kubelet[2335]: I1031 13:50:12.056891 2335 factory.go:223] Registration of the systemd container factory successfully Oct 31 13:50:12.067377 kubelet[2335]: I1031 13:50:12.067345 2335 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 13:50:12.068341 kubelet[2335]: I1031 13:50:12.068313 2335 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 31 13:50:12.068341 kubelet[2335]: I1031 13:50:12.068335 2335 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 13:50:12.068424 kubelet[2335]: I1031 13:50:12.068365 2335 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 13:50:12.068424 kubelet[2335]: E1031 13:50:12.068403 2335 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 13:50:12.070919 kubelet[2335]: I1031 13:50:12.070892 2335 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 13:50:12.070919 kubelet[2335]: I1031 13:50:12.070911 2335 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 13:50:12.070994 kubelet[2335]: I1031 13:50:12.070927 2335 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:50:12.070994 kubelet[2335]: E1031 13:50:12.070925 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 13:50:12.072555 kubelet[2335]: I1031 13:50:12.072537 2335 policy_none.go:49] "None policy: Start" Oct 31 13:50:12.072625 kubelet[2335]: I1031 13:50:12.072561 2335 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 13:50:12.072625 kubelet[2335]: I1031 13:50:12.072573 2335 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 13:50:12.074086 kubelet[2335]: I1031 13:50:12.074068 2335 policy_none.go:47] "Start" Oct 31 13:50:12.077889 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 13:50:12.092119 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 31 13:50:12.095225 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 13:50:12.104214 kubelet[2335]: E1031 13:50:12.104190 2335 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 13:50:12.104511 kubelet[2335]: I1031 13:50:12.104492 2335 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 13:50:12.104595 kubelet[2335]: I1031 13:50:12.104569 2335 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 13:50:12.104872 kubelet[2335]: I1031 13:50:12.104853 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 13:50:12.106197 kubelet[2335]: E1031 13:50:12.106153 2335 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 13:50:12.106251 kubelet[2335]: E1031 13:50:12.106230 2335 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 13:50:12.177849 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 31 13:50:12.203914 kubelet[2335]: E1031 13:50:12.203831 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:12.208080 kubelet[2335]: I1031 13:50:12.207977 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:50:12.208339 systemd[1]: Created slice kubepods-burstable-pod571280483e747d8e05a2fbb99fce4135.slice - libcontainer container kubepods-burstable-pod571280483e747d8e05a2fbb99fce4135.slice. Oct 31 13:50:12.208655 kubelet[2335]: E1031 13:50:12.208629 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 31 13:50:12.215287 kubelet[2335]: E1031 13:50:12.215259 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:12.218132 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
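Note: the repeated "dial tcp 10.0.0.132:6443: connect: connection refused" errors are the kubelet trying to reach the API server that it is itself about to launch as a static pod from /etc/kubernetes/manifests; the kubepods-burstable-pod... slices created above carry the same UIDs as the kube-scheduler/kube-apiserver/kube-controller-manager sandboxes started below, so the errors should stop once kube-apiserver is listening on 6443. Spot checks while that happens:

    $ ls /etc/kubernetes/manifests/
    $ curl -sk https://10.0.0.132:6443/healthz ; echo   # connection refused until the apiserver static pod is up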
Oct 31 13:50:12.219792 kubelet[2335]: E1031 13:50:12.219773 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:12.254107 kubelet[2335]: I1031 13:50:12.254071 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:12.254186 kubelet[2335]: I1031 13:50:12.254137 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:12.254186 kubelet[2335]: I1031 13:50:12.254166 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:12.254345 kubelet[2335]: I1031 13:50:12.254182 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/571280483e747d8e05a2fbb99fce4135-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"571280483e747d8e05a2fbb99fce4135\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:12.254345 kubelet[2335]: I1031 13:50:12.254201 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/571280483e747d8e05a2fbb99fce4135-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"571280483e747d8e05a2fbb99fce4135\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:12.254345 kubelet[2335]: I1031 13:50:12.254218 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:12.254345 kubelet[2335]: I1031 13:50:12.254268 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:12.254345 kubelet[2335]: I1031 13:50:12.254338 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:12.254488 kubelet[2335]: I1031 13:50:12.254355 2335 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/571280483e747d8e05a2fbb99fce4135-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"571280483e747d8e05a2fbb99fce4135\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:12.254755 kubelet[2335]: E1031 13:50:12.254727 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Oct 31 13:50:12.410545 kubelet[2335]: I1031 13:50:12.410519 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:50:12.410861 kubelet[2335]: E1031 13:50:12.410831 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 31 13:50:12.506588 kubelet[2335]: E1031 13:50:12.506465 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:12.507470 containerd[1578]: time="2025-10-31T13:50:12.507411954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 31 13:50:12.517050 kubelet[2335]: E1031 13:50:12.517022 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:12.517545 containerd[1578]: time="2025-10-31T13:50:12.517517369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:571280483e747d8e05a2fbb99fce4135,Namespace:kube-system,Attempt:0,}" Oct 31 13:50:12.522202 kubelet[2335]: E1031 13:50:12.522181 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:12.522567 containerd[1578]: time="2025-10-31T13:50:12.522540000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 31 13:50:12.655782 kubelet[2335]: E1031 13:50:12.655731 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Oct 31 13:50:12.812941 kubelet[2335]: I1031 13:50:12.812834 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:50:12.813180 kubelet[2335]: E1031 13:50:12.813157 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Oct 31 13:50:12.888177 kubelet[2335]: E1031 13:50:12.888129 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Oct 31 13:50:12.950888 kubelet[2335]: E1031 13:50:12.950849 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 13:50:12.976043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992880056.mount: Deactivated successfully. Oct 31 13:50:12.981456 containerd[1578]: time="2025-10-31T13:50:12.981420022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:50:12.982877 containerd[1578]: time="2025-10-31T13:50:12.982847170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:50:12.984684 containerd[1578]: time="2025-10-31T13:50:12.984659238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 31 13:50:12.985250 containerd[1578]: time="2025-10-31T13:50:12.985224656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 13:50:12.987313 containerd[1578]: time="2025-10-31T13:50:12.986523805Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:50:12.987700 containerd[1578]: time="2025-10-31T13:50:12.987679085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 13:50:12.988086 containerd[1578]: time="2025-10-31T13:50:12.988045570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:50:12.992196 containerd[1578]: time="2025-10-31T13:50:12.990717246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 13:50:12.992532 containerd[1578]: time="2025-10-31T13:50:12.992501101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 482.982168ms" Oct 31 13:50:12.993507 containerd[1578]: time="2025-10-31T13:50:12.993466346Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 469.359255ms" Oct 31 13:50:12.994703 containerd[1578]: time="2025-10-31T13:50:12.994665508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.292269ms" Oct 31 13:50:13.016226 containerd[1578]: time="2025-10-31T13:50:13.015682485Z" level=info msg="connecting to shim 2da93ea15e88f027aa75edbf4559be7d35404a98b5e86a9012bd1f7203839ebe" address="unix:///run/containerd/s/84a0504fb951e1c520726d27868c4697ea0b0a68f819ef1f96310cdf8df95b29" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:13.017450 containerd[1578]: time="2025-10-31T13:50:13.017418246Z" level=info msg="connecting to shim cbedfcd5816b7621dc89f8bd959dcca015fd96bb393187d0f8b624550233941b" address="unix:///run/containerd/s/3821546fcdf8257f55416a362102790b4a32c1d34208b482d73e27e765be526e" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:13.020490 containerd[1578]: time="2025-10-31T13:50:13.020451569Z" level=info msg="connecting to shim 9b59cbaff01b7a85e7d0c54b28029c8e819c8493d30d223b755705b2fd9fab82" address="unix:///run/containerd/s/9ef81283fabc6d401aeeeca31074b087b86b95faf5d1bd98e23274dd57d2ad91" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:13.041461 systemd[1]: Started cri-containerd-2da93ea15e88f027aa75edbf4559be7d35404a98b5e86a9012bd1f7203839ebe.scope - libcontainer container 2da93ea15e88f027aa75edbf4559be7d35404a98b5e86a9012bd1f7203839ebe. Oct 31 13:50:13.042648 systemd[1]: Started cri-containerd-cbedfcd5816b7621dc89f8bd959dcca015fd96bb393187d0f8b624550233941b.scope - libcontainer container cbedfcd5816b7621dc89f8bd959dcca015fd96bb393187d0f8b624550233941b. Oct 31 13:50:13.048078 systemd[1]: Started cri-containerd-9b59cbaff01b7a85e7d0c54b28029c8e819c8493d30d223b755705b2fd9fab82.scope - libcontainer container 9b59cbaff01b7a85e7d0c54b28029c8e819c8493d30d223b755705b2fd9fab82. 
Oct 31 13:50:13.085727 containerd[1578]: time="2025-10-31T13:50:13.085265155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbedfcd5816b7621dc89f8bd959dcca015fd96bb393187d0f8b624550233941b\"" Oct 31 13:50:13.085810 containerd[1578]: time="2025-10-31T13:50:13.085770101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2da93ea15e88f027aa75edbf4559be7d35404a98b5e86a9012bd1f7203839ebe\"" Oct 31 13:50:13.087632 kubelet[2335]: E1031 13:50:13.087265 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:13.087632 kubelet[2335]: E1031 13:50:13.087509 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:13.087726 containerd[1578]: time="2025-10-31T13:50:13.087424729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:571280483e747d8e05a2fbb99fce4135,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b59cbaff01b7a85e7d0c54b28029c8e819c8493d30d223b755705b2fd9fab82\"" Oct 31 13:50:13.087905 kubelet[2335]: E1031 13:50:13.087879 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:13.091090 containerd[1578]: time="2025-10-31T13:50:13.091058476Z" level=info msg="CreateContainer within sandbox \"cbedfcd5816b7621dc89f8bd959dcca015fd96bb393187d0f8b624550233941b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 13:50:13.092845 containerd[1578]: time="2025-10-31T13:50:13.092818235Z" level=info msg="CreateContainer within sandbox \"9b59cbaff01b7a85e7d0c54b28029c8e819c8493d30d223b755705b2fd9fab82\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 13:50:13.094620 containerd[1578]: time="2025-10-31T13:50:13.094592539Z" level=info msg="CreateContainer within sandbox \"2da93ea15e88f027aa75edbf4559be7d35404a98b5e86a9012bd1f7203839ebe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 13:50:13.099022 containerd[1578]: time="2025-10-31T13:50:13.098976513Z" level=info msg="Container 055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:13.103673 containerd[1578]: time="2025-10-31T13:50:13.103008311Z" level=info msg="Container 42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:13.107354 containerd[1578]: time="2025-10-31T13:50:13.107266520Z" level=info msg="CreateContainer within sandbox \"cbedfcd5816b7621dc89f8bd959dcca015fd96bb393187d0f8b624550233941b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38\"" Oct 31 13:50:13.107804 containerd[1578]: time="2025-10-31T13:50:13.107777676Z" level=info msg="StartContainer for \"055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38\"" Oct 31 13:50:13.108183 containerd[1578]: time="2025-10-31T13:50:13.108129452Z" level=info msg="Container 
46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:13.109096 containerd[1578]: time="2025-10-31T13:50:13.109068388Z" level=info msg="connecting to shim 055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38" address="unix:///run/containerd/s/3821546fcdf8257f55416a362102790b4a32c1d34208b482d73e27e765be526e" protocol=ttrpc version=3 Oct 31 13:50:13.112882 containerd[1578]: time="2025-10-31T13:50:13.112848494Z" level=info msg="CreateContainer within sandbox \"9b59cbaff01b7a85e7d0c54b28029c8e819c8493d30d223b755705b2fd9fab82\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8\"" Oct 31 13:50:13.114065 containerd[1578]: time="2025-10-31T13:50:13.113985195Z" level=info msg="StartContainer for \"42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8\"" Oct 31 13:50:13.115099 containerd[1578]: time="2025-10-31T13:50:13.115052742Z" level=info msg="connecting to shim 42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8" address="unix:///run/containerd/s/9ef81283fabc6d401aeeeca31074b087b86b95faf5d1bd98e23274dd57d2ad91" protocol=ttrpc version=3 Oct 31 13:50:13.117151 containerd[1578]: time="2025-10-31T13:50:13.117118442Z" level=info msg="CreateContainer within sandbox \"2da93ea15e88f027aa75edbf4559be7d35404a98b5e86a9012bd1f7203839ebe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b\"" Oct 31 13:50:13.118162 containerd[1578]: time="2025-10-31T13:50:13.118128134Z" level=info msg="StartContainer for \"46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b\"" Oct 31 13:50:13.119052 containerd[1578]: time="2025-10-31T13:50:13.119027927Z" level=info msg="connecting to shim 46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b" address="unix:///run/containerd/s/84a0504fb951e1c520726d27868c4697ea0b0a68f819ef1f96310cdf8df95b29" protocol=ttrpc version=3 Oct 31 13:50:13.132411 systemd[1]: Started cri-containerd-055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38.scope - libcontainer container 055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38. Oct 31 13:50:13.135770 systemd[1]: Started cri-containerd-46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b.scope - libcontainer container 46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b. Oct 31 13:50:13.140257 systemd[1]: Started cri-containerd-42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8.scope - libcontainer container 42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8. 
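The repeated "dial tcp 10.0.0.132:6443: connect: connection refused" errors above come from the kubelet and its client-go reflectors trying to reach the API server before the static kube-apiserver container started in the surrounding entries is actually serving. A minimal standalone sketch, assuming the advertise address 10.0.0.132:6443 seen in the log, that simply polls the TCP port to observe when the apiserver becomes reachable; it is an illustration, not part of the captured log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.132:6443" // apiserver address taken from the errors above; adjust per node
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// This matches the "connect: connection refused" phase before the
			// static kube-apiserver container is up and listening.
			fmt.Printf("%s not reachable yet: %v\n", addr, err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("%s is accepting TCP connections\n", addr)
		return
	}
}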
Oct 31 13:50:13.187849 containerd[1578]: time="2025-10-31T13:50:13.187726710Z" level=info msg="StartContainer for \"46a38be6113d99946bc2e7b23441501b5ea017abcd91b5311c5b7b9d200e720b\" returns successfully" Oct 31 13:50:13.188049 containerd[1578]: time="2025-10-31T13:50:13.187800712Z" level=info msg="StartContainer for \"42b7f513a2bd639e3c88c8150781cf8731a3aa5a3dd7870cdc9d55e315a569d8\" returns successfully" Oct 31 13:50:13.189040 containerd[1578]: time="2025-10-31T13:50:13.188980723Z" level=info msg="StartContainer for \"055ce94b2b45315aff7f6e2c1e4b04574e7ef14e7d271934d5aa89c6ded76a38\" returns successfully" Oct 31 13:50:13.196047 kubelet[2335]: E1031 13:50:13.196004 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 13:50:13.616183 kubelet[2335]: I1031 13:50:13.615875 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:50:14.081919 kubelet[2335]: E1031 13:50:14.081818 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:14.082001 kubelet[2335]: E1031 13:50:14.081985 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:14.083552 kubelet[2335]: E1031 13:50:14.083517 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:14.083687 kubelet[2335]: E1031 13:50:14.083665 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:14.084341 kubelet[2335]: E1031 13:50:14.084321 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:14.084478 kubelet[2335]: E1031 13:50:14.084458 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:14.884805 kubelet[2335]: I1031 13:50:14.884770 2335 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 13:50:14.884805 kubelet[2335]: E1031 13:50:14.884807 2335 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 13:50:14.900281 kubelet[2335]: E1031 13:50:14.900235 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.001287 kubelet[2335]: E1031 13:50:15.000886 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.086834 kubelet[2335]: E1031 13:50:15.086802 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:15.086940 kubelet[2335]: E1031 13:50:15.086920 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:15.087163 kubelet[2335]: E1031 13:50:15.087145 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:15.087247 kubelet[2335]: E1031 13:50:15.087231 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:15.101582 kubelet[2335]: E1031 13:50:15.101556 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.202564 kubelet[2335]: E1031 13:50:15.202462 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.302742 kubelet[2335]: E1031 13:50:15.302704 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.403414 kubelet[2335]: E1031 13:50:15.403376 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.504322 kubelet[2335]: E1031 13:50:15.504004 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.604498 kubelet[2335]: E1031 13:50:15.604462 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.705040 kubelet[2335]: E1031 13:50:15.704987 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.805834 kubelet[2335]: E1031 13:50:15.805732 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:15.906429 kubelet[2335]: E1031 13:50:15.906387 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:16.007037 kubelet[2335]: E1031 13:50:16.006998 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:16.089028 kubelet[2335]: E1031 13:50:16.088690 2335 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 13:50:16.089398 kubelet[2335]: E1031 13:50:16.089300 2335 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:16.107983 kubelet[2335]: E1031 13:50:16.107955 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:16.208564 kubelet[2335]: E1031 13:50:16.208529 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:16.309698 kubelet[2335]: E1031 13:50:16.309652 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:16.353312 kubelet[2335]: I1031 13:50:16.352536 2335 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:16.360998 kubelet[2335]: I1031 13:50:16.360956 2335 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:16.366331 kubelet[2335]: I1031 13:50:16.364760 2335 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:16.796047 systemd[1]: Reload requested from client PID 2626 ('systemctl') (unit session-7.scope)... Oct 31 13:50:16.796066 systemd[1]: Reloading... Oct 31 13:50:16.856312 zram_generator::config[2670]: No configuration found. Oct 31 13:50:17.018078 systemd[1]: Reloading finished in 221 ms. Oct 31 13:50:17.045411 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:50:17.056324 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 13:50:17.056626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:17.056678 systemd[1]: kubelet.service: Consumed 907ms CPU time, 123.9M memory peak. Oct 31 13:50:17.058231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 13:50:17.214162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 13:50:17.218662 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 13:50:17.262216 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 13:50:17.262502 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 13:50:17.262502 kubelet[2712]: I1031 13:50:17.262424 2712 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 13:50:17.268122 kubelet[2712]: I1031 13:50:17.268086 2712 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 13:50:17.268122 kubelet[2712]: I1031 13:50:17.268110 2712 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 13:50:17.268216 kubelet[2712]: I1031 13:50:17.268137 2712 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 13:50:17.268216 kubelet[2712]: I1031 13:50:17.268143 2712 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 13:50:17.268412 kubelet[2712]: I1031 13:50:17.268380 2712 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 13:50:17.269561 kubelet[2712]: I1031 13:50:17.269544 2712 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 13:50:17.271910 kubelet[2712]: I1031 13:50:17.271867 2712 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 13:50:17.275816 kubelet[2712]: I1031 13:50:17.275778 2712 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 13:50:17.279301 kubelet[2712]: I1031 13:50:17.278853 2712 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 13:50:17.279301 kubelet[2712]: I1031 13:50:17.279035 2712 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 13:50:17.279301 kubelet[2712]: I1031 13:50:17.279053 2712 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 13:50:17.279301 kubelet[2712]: I1031 13:50:17.279181 2712 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 13:50:17.279474 kubelet[2712]: I1031 13:50:17.279188 2712 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 13:50:17.279474 kubelet[2712]: I1031 13:50:17.279209 2712 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 13:50:17.280017 kubelet[2712]: I1031 13:50:17.279997 2712 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:50:17.280153 kubelet[2712]: I1031 13:50:17.280142 2712 kubelet.go:475] "Attempting to sync node with API server" Oct 31 13:50:17.280182 kubelet[2712]: I1031 13:50:17.280164 2712 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 13:50:17.280203 kubelet[2712]: I1031 13:50:17.280186 2712 kubelet.go:387] "Adding apiserver pod source" Oct 31 13:50:17.280203 kubelet[2712]: I1031 13:50:17.280196 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 13:50:17.282293 kubelet[2712]: I1031 13:50:17.280977 2712 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 13:50:17.282293 kubelet[2712]: I1031 13:50:17.282109 2712 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 13:50:17.282293 kubelet[2712]: I1031 13:50:17.282174 2712 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 13:50:17.285372 
kubelet[2712]: I1031 13:50:17.285343 2712 server.go:1262] "Started kubelet" Oct 31 13:50:17.285536 kubelet[2712]: I1031 13:50:17.285501 2712 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 13:50:17.285631 kubelet[2712]: I1031 13:50:17.285610 2712 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 13:50:17.285857 kubelet[2712]: I1031 13:50:17.285820 2712 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 13:50:17.285903 kubelet[2712]: I1031 13:50:17.285868 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 13:50:17.285941 kubelet[2712]: I1031 13:50:17.285834 2712 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 13:50:17.287364 kubelet[2712]: I1031 13:50:17.286695 2712 server.go:310] "Adding debug handlers to kubelet server" Oct 31 13:50:17.289179 kubelet[2712]: I1031 13:50:17.289160 2712 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 13:50:17.289544 kubelet[2712]: I1031 13:50:17.289236 2712 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 13:50:17.289544 kubelet[2712]: I1031 13:50:17.289359 2712 reconciler.go:29] "Reconciler: start to sync state" Oct 31 13:50:17.292415 kubelet[2712]: I1031 13:50:17.288183 2712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 13:50:17.294231 kubelet[2712]: E1031 13:50:17.294196 2712 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 13:50:17.299055 kubelet[2712]: I1031 13:50:17.297723 2712 factory.go:223] Registration of the systemd container factory successfully Oct 31 13:50:17.299055 kubelet[2712]: I1031 13:50:17.297886 2712 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 13:50:17.309900 kubelet[2712]: I1031 13:50:17.309824 2712 factory.go:223] Registration of the containerd container factory successfully Oct 31 13:50:17.310682 kubelet[2712]: I1031 13:50:17.310637 2712 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 13:50:17.311676 kubelet[2712]: I1031 13:50:17.311642 2712 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 31 13:50:17.311676 kubelet[2712]: I1031 13:50:17.311671 2712 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 13:50:17.311750 kubelet[2712]: I1031 13:50:17.311694 2712 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 13:50:17.311750 kubelet[2712]: E1031 13:50:17.311742 2712 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 13:50:17.313577 kubelet[2712]: E1031 13:50:17.313552 2712 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 13:50:17.342957 kubelet[2712]: I1031 13:50:17.342899 2712 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 13:50:17.342957 kubelet[2712]: I1031 13:50:17.342943 2712 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 13:50:17.342957 kubelet[2712]: I1031 13:50:17.342964 2712 state_mem.go:36] "Initialized new in-memory state store" Oct 31 13:50:17.343124 kubelet[2712]: I1031 13:50:17.343077 2712 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 13:50:17.343124 kubelet[2712]: I1031 13:50:17.343088 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 13:50:17.343163 kubelet[2712]: I1031 13:50:17.343133 2712 policy_none.go:49] "None policy: Start" Oct 31 13:50:17.343163 kubelet[2712]: I1031 13:50:17.343144 2712 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 13:50:17.343163 kubelet[2712]: I1031 13:50:17.343153 2712 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 13:50:17.343286 kubelet[2712]: I1031 13:50:17.343266 2712 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 31 13:50:17.343332 kubelet[2712]: I1031 13:50:17.343294 2712 policy_none.go:47] "Start" Oct 31 13:50:17.348087 kubelet[2712]: E1031 13:50:17.348067 2712 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 13:50:17.348568 kubelet[2712]: I1031 13:50:17.348550 2712 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 13:50:17.348700 kubelet[2712]: I1031 13:50:17.348656 2712 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 13:50:17.349454 kubelet[2712]: I1031 13:50:17.349440 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 13:50:17.350781 kubelet[2712]: E1031 13:50:17.350759 2712 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 13:50:17.413269 kubelet[2712]: I1031 13:50:17.413231 2712 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:17.413526 kubelet[2712]: I1031 13:50:17.413255 2712 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:17.413625 kubelet[2712]: I1031 13:50:17.413305 2712 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.419031 kubelet[2712]: E1031 13:50:17.418998 2712 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:17.419796 kubelet[2712]: E1031 13:50:17.419775 2712 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:17.419866 kubelet[2712]: E1031 13:50:17.419846 2712 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.452399 kubelet[2712]: I1031 13:50:17.452382 2712 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 13:50:17.460096 kubelet[2712]: I1031 13:50:17.460059 2712 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 13:50:17.460152 kubelet[2712]: I1031 13:50:17.460141 2712 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 13:50:17.591432 kubelet[2712]: I1031 13:50:17.591328 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:17.591432 kubelet[2712]: I1031 13:50:17.591369 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/571280483e747d8e05a2fbb99fce4135-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"571280483e747d8e05a2fbb99fce4135\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:17.591432 kubelet[2712]: I1031 13:50:17.591390 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/571280483e747d8e05a2fbb99fce4135-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"571280483e747d8e05a2fbb99fce4135\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:17.591432 kubelet[2712]: I1031 13:50:17.591410 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/571280483e747d8e05a2fbb99fce4135-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"571280483e747d8e05a2fbb99fce4135\") " pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:17.591432 kubelet[2712]: I1031 13:50:17.591426 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.591634 kubelet[2712]: I1031 13:50:17.591440 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.591634 kubelet[2712]: I1031 13:50:17.591454 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.591634 kubelet[2712]: I1031 13:50:17.591468 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.591634 kubelet[2712]: I1031 13:50:17.591484 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 13:50:17.719893 kubelet[2712]: E1031 13:50:17.719795 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:17.720005 kubelet[2712]: E1031 13:50:17.719991 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:17.720071 kubelet[2712]: E1031 13:50:17.719994 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:18.280892 kubelet[2712]: I1031 13:50:18.280854 2712 apiserver.go:52] "Watching apiserver" Oct 31 13:50:18.289740 kubelet[2712]: I1031 13:50:18.289696 2712 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 13:50:18.329672 kubelet[2712]: I1031 13:50:18.329611 2712 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:18.330069 kubelet[2712]: I1031 13:50:18.330026 2712 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:18.334301 kubelet[2712]: E1031 13:50:18.331811 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:18.337739 kubelet[2712]: E1031 13:50:18.337703 2712 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 13:50:18.337874 kubelet[2712]: E1031 13:50:18.337854 2712 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:18.341841 kubelet[2712]: E1031 13:50:18.341810 2712 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 13:50:18.342002 kubelet[2712]: E1031 13:50:18.341979 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:18.371648 kubelet[2712]: I1031 13:50:18.371556 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.371508501 podStartE2EDuration="2.371508501s" podCreationTimestamp="2025-10-31 13:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:50:18.370931056 +0000 UTC m=+1.148311331" watchObservedRunningTime="2025-10-31 13:50:18.371508501 +0000 UTC m=+1.148888776" Oct 31 13:50:18.388589 kubelet[2712]: I1031 13:50:18.388508 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.388493529 podStartE2EDuration="2.388493529s" podCreationTimestamp="2025-10-31 13:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:50:18.388449772 +0000 UTC m=+1.165830087" watchObservedRunningTime="2025-10-31 13:50:18.388493529 +0000 UTC m=+1.165873804" Oct 31 13:50:18.388589 kubelet[2712]: I1031 13:50:18.388587 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.388582403 podStartE2EDuration="2.388582403s" podCreationTimestamp="2025-10-31 13:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:50:18.379560104 +0000 UTC m=+1.156940379" watchObservedRunningTime="2025-10-31 13:50:18.388582403 +0000 UTC m=+1.165962678" Oct 31 13:50:19.330682 kubelet[2712]: E1031 13:50:19.330654 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:19.330975 kubelet[2712]: E1031 13:50:19.330756 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:21.133162 kubelet[2712]: E1031 13:50:21.133117 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:21.708176 kubelet[2712]: E1031 13:50:21.708135 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:24.519950 kubelet[2712]: I1031 13:50:24.519918 2712 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 13:50:24.520323 containerd[1578]: time="2025-10-31T13:50:24.520216651Z" level=info msg="No cni config template is specified, wait for other system components to drop 
the config." Oct 31 13:50:24.520591 kubelet[2712]: I1031 13:50:24.520392 2712 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 13:50:25.630742 systemd[1]: Created slice kubepods-besteffort-pode884b063_4b6b_45ff_aec9_3013ad921c23.slice - libcontainer container kubepods-besteffort-pode884b063_4b6b_45ff_aec9_3013ad921c23.slice. Oct 31 13:50:25.644367 kubelet[2712]: I1031 13:50:25.644332 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e884b063-4b6b-45ff-aec9-3013ad921c23-kube-proxy\") pod \"kube-proxy-zkz6q\" (UID: \"e884b063-4b6b-45ff-aec9-3013ad921c23\") " pod="kube-system/kube-proxy-zkz6q" Oct 31 13:50:25.644664 kubelet[2712]: I1031 13:50:25.644367 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e884b063-4b6b-45ff-aec9-3013ad921c23-xtables-lock\") pod \"kube-proxy-zkz6q\" (UID: \"e884b063-4b6b-45ff-aec9-3013ad921c23\") " pod="kube-system/kube-proxy-zkz6q" Oct 31 13:50:25.644664 kubelet[2712]: I1031 13:50:25.644416 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e884b063-4b6b-45ff-aec9-3013ad921c23-lib-modules\") pod \"kube-proxy-zkz6q\" (UID: \"e884b063-4b6b-45ff-aec9-3013ad921c23\") " pod="kube-system/kube-proxy-zkz6q" Oct 31 13:50:25.644664 kubelet[2712]: I1031 13:50:25.644431 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qj4r\" (UniqueName: \"kubernetes.io/projected/e884b063-4b6b-45ff-aec9-3013ad921c23-kube-api-access-4qj4r\") pod \"kube-proxy-zkz6q\" (UID: \"e884b063-4b6b-45ff-aec9-3013ad921c23\") " pod="kube-system/kube-proxy-zkz6q" Oct 31 13:50:25.727007 systemd[1]: Created slice kubepods-besteffort-podafcdb09a_d340_44ec_adc8_482aa9307d37.slice - libcontainer container kubepods-besteffort-podafcdb09a_d340_44ec_adc8_482aa9307d37.slice. 
Oct 31 13:50:25.744724 kubelet[2712]: I1031 13:50:25.744682 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkckw\" (UniqueName: \"kubernetes.io/projected/afcdb09a-d340-44ec-adc8-482aa9307d37-kube-api-access-kkckw\") pod \"tigera-operator-65cdcdfd6d-xsn2w\" (UID: \"afcdb09a-d340-44ec-adc8-482aa9307d37\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xsn2w" Oct 31 13:50:25.744813 kubelet[2712]: I1031 13:50:25.744770 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/afcdb09a-d340-44ec-adc8-482aa9307d37-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-xsn2w\" (UID: \"afcdb09a-d340-44ec-adc8-482aa9307d37\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xsn2w" Oct 31 13:50:25.949771 kubelet[2712]: E1031 13:50:25.949673 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:25.950778 containerd[1578]: time="2025-10-31T13:50:25.950743882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zkz6q,Uid:e884b063-4b6b-45ff-aec9-3013ad921c23,Namespace:kube-system,Attempt:0,}" Oct 31 13:50:25.968218 containerd[1578]: time="2025-10-31T13:50:25.968124568Z" level=info msg="connecting to shim 3f732cb60aa9bf02a69e60623b82fdee24526f2a4ede0331369de675b7f55f1c" address="unix:///run/containerd/s/0de5b5085ff04cc54b8ac773631832e069176376e6399cb41a369b0aced7c983" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:25.989526 systemd[1]: Started cri-containerd-3f732cb60aa9bf02a69e60623b82fdee24526f2a4ede0331369de675b7f55f1c.scope - libcontainer container 3f732cb60aa9bf02a69e60623b82fdee24526f2a4ede0331369de675b7f55f1c. 
Oct 31 13:50:26.010826 containerd[1578]: time="2025-10-31T13:50:26.010784463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zkz6q,Uid:e884b063-4b6b-45ff-aec9-3013ad921c23,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f732cb60aa9bf02a69e60623b82fdee24526f2a4ede0331369de675b7f55f1c\"" Oct 31 13:50:26.011485 kubelet[2712]: E1031 13:50:26.011460 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:26.017912 containerd[1578]: time="2025-10-31T13:50:26.017877403Z" level=info msg="CreateContainer within sandbox \"3f732cb60aa9bf02a69e60623b82fdee24526f2a4ede0331369de675b7f55f1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 13:50:26.026026 containerd[1578]: time="2025-10-31T13:50:26.025994417Z" level=info msg="Container cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:26.033518 containerd[1578]: time="2025-10-31T13:50:26.033471589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xsn2w,Uid:afcdb09a-d340-44ec-adc8-482aa9307d37,Namespace:tigera-operator,Attempt:0,}" Oct 31 13:50:26.034251 containerd[1578]: time="2025-10-31T13:50:26.034202745Z" level=info msg="CreateContainer within sandbox \"3f732cb60aa9bf02a69e60623b82fdee24526f2a4ede0331369de675b7f55f1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640\"" Oct 31 13:50:26.035385 containerd[1578]: time="2025-10-31T13:50:26.034923612Z" level=info msg="StartContainer for \"cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640\"" Oct 31 13:50:26.036388 containerd[1578]: time="2025-10-31T13:50:26.036360143Z" level=info msg="connecting to shim cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640" address="unix:///run/containerd/s/0de5b5085ff04cc54b8ac773631832e069176376e6399cb41a369b0aced7c983" protocol=ttrpc version=3 Oct 31 13:50:26.054576 containerd[1578]: time="2025-10-31T13:50:26.054526345Z" level=info msg="connecting to shim 2d5e32210d20288edcd930313a4c41fa0564e9e73fcdce90462f0aefe163430c" address="unix:///run/containerd/s/1b2cd2e617ab7a38dc56fb4e5ae1d510f3689e1faa2c2244256db275b51a5184" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:26.060444 systemd[1]: Started cri-containerd-cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640.scope - libcontainer container cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640. Oct 31 13:50:26.077421 systemd[1]: Started cri-containerd-2d5e32210d20288edcd930313a4c41fa0564e9e73fcdce90462f0aefe163430c.scope - libcontainer container 2d5e32210d20288edcd930313a4c41fa0564e9e73fcdce90462f0aefe163430c. 
Oct 31 13:50:26.103341 containerd[1578]: time="2025-10-31T13:50:26.102566289Z" level=info msg="StartContainer for \"cf75919a082e151ec7e68c1e42b0aef8ee28a6490b995aa9114674d4ae512640\" returns successfully" Oct 31 13:50:26.115921 containerd[1578]: time="2025-10-31T13:50:26.115883580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xsn2w,Uid:afcdb09a-d340-44ec-adc8-482aa9307d37,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2d5e32210d20288edcd930313a4c41fa0564e9e73fcdce90462f0aefe163430c\"" Oct 31 13:50:26.118366 containerd[1578]: time="2025-10-31T13:50:26.117734689Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 13:50:26.345537 kubelet[2712]: E1031 13:50:26.344370 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:26.357883 kubelet[2712]: I1031 13:50:26.357053 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zkz6q" podStartSLOduration=1.357038559 podStartE2EDuration="1.357038559s" podCreationTimestamp="2025-10-31 13:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:50:26.356723302 +0000 UTC m=+9.134103577" watchObservedRunningTime="2025-10-31 13:50:26.357038559 +0000 UTC m=+9.134418794" Oct 31 13:50:26.459871 kubelet[2712]: E1031 13:50:26.459783 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:26.758692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount993563096.mount: Deactivated successfully. Oct 31 13:50:27.345977 kubelet[2712]: E1031 13:50:27.345894 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:27.604219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424184058.mount: Deactivated successfully. 
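The recurring dns.go:154 "Nameserver limits exceeded" warnings throughout this log indicate the node's resolv.conf lists more nameservers than the kubelet will pass through, so the applied list is truncated to "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch that counts nameserver entries in /etc/resolv.conf to confirm that condition; the limit of 3 reflects the conventional glibc/kubelet resolver cap and is stated here as an assumption, and the sketch is an illustration rather than part of the captured log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const limit = 3 // assumed resolver cap; nameservers beyond this are omitted
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read resolv.conf:", err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	fmt.Printf("found %d nameservers: %v\n", len(servers), servers)
	if len(servers) > limit {
		fmt.Printf("more than %d nameservers configured; entries beyond the first %d will be ignored\n", limit, limit)
	}
}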
Oct 31 13:50:28.413153 containerd[1578]: time="2025-10-31T13:50:28.413091496Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:28.413686 containerd[1578]: time="2025-10-31T13:50:28.413626887Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 31 13:50:28.414621 containerd[1578]: time="2025-10-31T13:50:28.414584507Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:28.417516 containerd[1578]: time="2025-10-31T13:50:28.417473378Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:28.418513 containerd[1578]: time="2025-10-31T13:50:28.418415426Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.30004462s" Oct 31 13:50:28.418513 containerd[1578]: time="2025-10-31T13:50:28.418443086Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 31 13:50:28.423453 containerd[1578]: time="2025-10-31T13:50:28.423424687Z" level=info msg="CreateContainer within sandbox \"2d5e32210d20288edcd930313a4c41fa0564e9e73fcdce90462f0aefe163430c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 13:50:28.431214 containerd[1578]: time="2025-10-31T13:50:28.431172348Z" level=info msg="Container 42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:28.445906 containerd[1578]: time="2025-10-31T13:50:28.445858400Z" level=info msg="CreateContainer within sandbox \"2d5e32210d20288edcd930313a4c41fa0564e9e73fcdce90462f0aefe163430c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129\"" Oct 31 13:50:28.446385 containerd[1578]: time="2025-10-31T13:50:28.446352001Z" level=info msg="StartContainer for \"42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129\"" Oct 31 13:50:28.448878 containerd[1578]: time="2025-10-31T13:50:28.448570742Z" level=info msg="connecting to shim 42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129" address="unix:///run/containerd/s/1b2cd2e617ab7a38dc56fb4e5ae1d510f3689e1faa2c2244256db275b51a5184" protocol=ttrpc version=3 Oct 31 13:50:28.488422 systemd[1]: Started cri-containerd-42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129.scope - libcontainer container 42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129. 
Oct 31 13:50:28.514763 containerd[1578]: time="2025-10-31T13:50:28.514668522Z" level=info msg="StartContainer for \"42893d491b46562239a74c6e66173031511254bc156892d185ff6f7937249129\" returns successfully" Oct 31 13:50:29.359371 kubelet[2712]: I1031 13:50:29.358942 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-xsn2w" podStartSLOduration=2.056690411 podStartE2EDuration="4.358927006s" podCreationTimestamp="2025-10-31 13:50:25 +0000 UTC" firstStartedPulling="2025-10-31 13:50:26.1172579 +0000 UTC m=+8.894638175" lastFinishedPulling="2025-10-31 13:50:28.419494535 +0000 UTC m=+11.196874770" observedRunningTime="2025-10-31 13:50:29.358583368 +0000 UTC m=+12.135963643" watchObservedRunningTime="2025-10-31 13:50:29.358927006 +0000 UTC m=+12.136307281" Oct 31 13:50:31.143552 kubelet[2712]: E1031 13:50:31.143516 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:31.717173 kubelet[2712]: E1031 13:50:31.716880 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:33.536617 update_engine[1553]: I20251031 13:50:33.536509 1553 update_attempter.cc:509] Updating boot flags... Oct 31 13:50:33.795297 sudo[1783]: pam_unix(sudo:session): session closed for user root Oct 31 13:50:33.797581 sshd[1782]: Connection closed by 10.0.0.1 port 37842 Oct 31 13:50:33.798066 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Oct 31 13:50:33.801776 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:37842.service: Deactivated successfully. Oct 31 13:50:33.803596 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 13:50:33.803812 systemd[1]: session-7.scope: Consumed 7.088s CPU time, 213.5M memory peak. Oct 31 13:50:33.804766 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. Oct 31 13:50:33.806718 systemd-logind[1551]: Removed session 7. Oct 31 13:50:40.684449 systemd[1]: Created slice kubepods-besteffort-pod1a21a4f4_3b5c_443f_95fe_180c094ed9d5.slice - libcontainer container kubepods-besteffort-pod1a21a4f4_3b5c_443f_95fe_180c094ed9d5.slice. 
Oct 31 13:50:40.744726 kubelet[2712]: I1031 13:50:40.744609 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a21a4f4-3b5c-443f-95fe-180c094ed9d5-tigera-ca-bundle\") pod \"calico-typha-6659cd544-ns2p9\" (UID: \"1a21a4f4-3b5c-443f-95fe-180c094ed9d5\") " pod="calico-system/calico-typha-6659cd544-ns2p9" Oct 31 13:50:40.744726 kubelet[2712]: I1031 13:50:40.744675 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a21a4f4-3b5c-443f-95fe-180c094ed9d5-typha-certs\") pod \"calico-typha-6659cd544-ns2p9\" (UID: \"1a21a4f4-3b5c-443f-95fe-180c094ed9d5\") " pod="calico-system/calico-typha-6659cd544-ns2p9" Oct 31 13:50:40.744726 kubelet[2712]: I1031 13:50:40.744700 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7t9m\" (UniqueName: \"kubernetes.io/projected/1a21a4f4-3b5c-443f-95fe-180c094ed9d5-kube-api-access-b7t9m\") pod \"calico-typha-6659cd544-ns2p9\" (UID: \"1a21a4f4-3b5c-443f-95fe-180c094ed9d5\") " pod="calico-system/calico-typha-6659cd544-ns2p9" Oct 31 13:50:40.884534 systemd[1]: Created slice kubepods-besteffort-podc7f9327b_320c_403e_8083_a3deebb22737.slice - libcontainer container kubepods-besteffort-podc7f9327b_320c_403e_8083_a3deebb22737.slice. Oct 31 13:50:40.947706 kubelet[2712]: I1031 13:50:40.947382 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-flexvol-driver-host\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947706 kubelet[2712]: I1031 13:50:40.947416 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-lib-modules\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947706 kubelet[2712]: I1031 13:50:40.947435 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-var-lib-calico\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947706 kubelet[2712]: I1031 13:50:40.947451 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6sdj\" (UniqueName: \"kubernetes.io/projected/c7f9327b-320c-403e-8083-a3deebb22737-kube-api-access-v6sdj\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947706 kubelet[2712]: I1031 13:50:40.947469 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-policysync\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947878 kubelet[2712]: I1031 13:50:40.947482 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-var-run-calico\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947878 kubelet[2712]: I1031 13:50:40.947496 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9327b-320c-403e-8083-a3deebb22737-tigera-ca-bundle\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947878 kubelet[2712]: I1031 13:50:40.947511 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c7f9327b-320c-403e-8083-a3deebb22737-node-certs\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947878 kubelet[2712]: I1031 13:50:40.947529 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-cni-log-dir\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947878 kubelet[2712]: I1031 13:50:40.947542 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-cni-net-dir\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947996 kubelet[2712]: I1031 13:50:40.947554 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-xtables-lock\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.947996 kubelet[2712]: I1031 13:50:40.947568 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c7f9327b-320c-403e-8083-a3deebb22737-cni-bin-dir\") pod \"calico-node-r7c5h\" (UID: \"c7f9327b-320c-403e-8083-a3deebb22737\") " pod="calico-system/calico-node-r7c5h" Oct 31 13:50:40.990211 kubelet[2712]: E1031 13:50:40.989882 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:40.990592 containerd[1578]: time="2025-10-31T13:50:40.990561801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6659cd544-ns2p9,Uid:1a21a4f4-3b5c-443f-95fe-180c094ed9d5,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:41.043842 containerd[1578]: time="2025-10-31T13:50:41.043805699Z" level=info msg="connecting to shim e429d8330dfddc822adb385001e0c0bda6e94f3c0baccb418ae51c0a0a1228e6" address="unix:///run/containerd/s/789dcf0caf1c2670b5ea8135aff6b2c84fb75c215e64a7a90b836afedc52d9b6" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:41.062051 kubelet[2712]: E1031 13:50:41.061600 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.062051 kubelet[2712]: W1031 
13:50:41.061626 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.062051 kubelet[2712]: E1031 13:50:41.061649 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.070641 kubelet[2712]: E1031 13:50:41.070558 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:50:41.089375 kubelet[2712]: E1031 13:50:41.089340 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.089528 kubelet[2712]: W1031 13:50:41.089398 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.089528 kubelet[2712]: E1031 13:50:41.089419 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.091486 systemd[1]: Started cri-containerd-e429d8330dfddc822adb385001e0c0bda6e94f3c0baccb418ae51c0a0a1228e6.scope - libcontainer container e429d8330dfddc822adb385001e0c0bda6e94f3c0baccb418ae51c0a0a1228e6. Oct 31 13:50:41.134999 kubelet[2712]: E1031 13:50:41.134492 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.135728 kubelet[2712]: W1031 13:50:41.135455 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.135728 kubelet[2712]: E1031 13:50:41.135488 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.136336 kubelet[2712]: E1031 13:50:41.136240 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.136899 kubelet[2712]: W1031 13:50:41.136513 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.136899 kubelet[2712]: E1031 13:50:41.136568 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.137234 kubelet[2712]: E1031 13:50:41.137153 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.137234 kubelet[2712]: W1031 13:50:41.137167 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.137234 kubelet[2712]: E1031 13:50:41.137178 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.137541 kubelet[2712]: E1031 13:50:41.137515 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.137615 kubelet[2712]: W1031 13:50:41.137595 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.137677 kubelet[2712]: E1031 13:50:41.137666 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.138065 kubelet[2712]: E1031 13:50:41.138043 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.138065 kubelet[2712]: W1031 13:50:41.138062 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.138149 kubelet[2712]: E1031 13:50:41.138076 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.138243 kubelet[2712]: E1031 13:50:41.138229 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.138243 kubelet[2712]: W1031 13:50:41.138240 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.138311 kubelet[2712]: E1031 13:50:41.138251 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.138523 kubelet[2712]: E1031 13:50:41.138510 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.138551 kubelet[2712]: W1031 13:50:41.138523 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.138551 kubelet[2712]: E1031 13:50:41.138535 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.138715 kubelet[2712]: E1031 13:50:41.138685 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.138715 kubelet[2712]: W1031 13:50:41.138698 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.138715 kubelet[2712]: E1031 13:50:41.138706 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.138856 kubelet[2712]: E1031 13:50:41.138843 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.138856 kubelet[2712]: W1031 13:50:41.138854 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.138905 kubelet[2712]: E1031 13:50:41.138863 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.138984 kubelet[2712]: E1031 13:50:41.138974 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139016 kubelet[2712]: W1031 13:50:41.138985 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139016 kubelet[2712]: E1031 13:50:41.139004 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.139149 kubelet[2712]: E1031 13:50:41.139138 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139149 kubelet[2712]: W1031 13:50:41.139148 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139196 kubelet[2712]: E1031 13:50:41.139156 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.139300 kubelet[2712]: E1031 13:50:41.139271 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139300 kubelet[2712]: W1031 13:50:41.139291 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139300 kubelet[2712]: E1031 13:50:41.139298 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.139429 kubelet[2712]: E1031 13:50:41.139418 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139429 kubelet[2712]: W1031 13:50:41.139428 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139474 kubelet[2712]: E1031 13:50:41.139435 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.139546 kubelet[2712]: E1031 13:50:41.139536 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139546 kubelet[2712]: W1031 13:50:41.139545 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139590 kubelet[2712]: E1031 13:50:41.139552 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.139675 kubelet[2712]: E1031 13:50:41.139667 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139700 kubelet[2712]: W1031 13:50:41.139675 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139700 kubelet[2712]: E1031 13:50:41.139682 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.139813 kubelet[2712]: E1031 13:50:41.139800 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139813 kubelet[2712]: W1031 13:50:41.139809 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.139859 kubelet[2712]: E1031 13:50:41.139816 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.139947 kubelet[2712]: E1031 13:50:41.139936 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.139947 kubelet[2712]: W1031 13:50:41.139945 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.140014 kubelet[2712]: E1031 13:50:41.139953 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.140080 kubelet[2712]: E1031 13:50:41.140069 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.140080 kubelet[2712]: W1031 13:50:41.140079 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.140137 kubelet[2712]: E1031 13:50:41.140086 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.140207 kubelet[2712]: E1031 13:50:41.140196 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.140207 kubelet[2712]: W1031 13:50:41.140205 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.140256 kubelet[2712]: E1031 13:50:41.140212 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.140335 kubelet[2712]: E1031 13:50:41.140324 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.140335 kubelet[2712]: W1031 13:50:41.140334 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.140401 kubelet[2712]: E1031 13:50:41.140341 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.144103 containerd[1578]: time="2025-10-31T13:50:41.144066763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6659cd544-ns2p9,Uid:1a21a4f4-3b5c-443f-95fe-180c094ed9d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"e429d8330dfddc822adb385001e0c0bda6e94f3c0baccb418ae51c0a0a1228e6\"" Oct 31 13:50:41.145404 kubelet[2712]: E1031 13:50:41.145372 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:41.147458 containerd[1578]: time="2025-10-31T13:50:41.147424652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 13:50:41.150589 kubelet[2712]: E1031 13:50:41.150339 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.150589 kubelet[2712]: W1031 13:50:41.150356 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.150589 kubelet[2712]: E1031 13:50:41.150369 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.150589 kubelet[2712]: I1031 13:50:41.150392 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fda5ab0e-82e2-4b7d-827a-809d2fbca767-socket-dir\") pod \"csi-node-driver-msghp\" (UID: \"fda5ab0e-82e2-4b7d-827a-809d2fbca767\") " pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:41.150803 kubelet[2712]: E1031 13:50:41.150789 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.151388 kubelet[2712]: W1031 13:50:41.151370 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.151454 kubelet[2712]: E1031 13:50:41.151444 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.151520 kubelet[2712]: I1031 13:50:41.151508 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fda5ab0e-82e2-4b7d-827a-809d2fbca767-registration-dir\") pod \"csi-node-driver-msghp\" (UID: \"fda5ab0e-82e2-4b7d-827a-809d2fbca767\") " pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:41.151985 kubelet[2712]: E1031 13:50:41.151961 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.151985 kubelet[2712]: W1031 13:50:41.151976 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.152342 kubelet[2712]: E1031 13:50:41.152304 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.153405 kubelet[2712]: E1031 13:50:41.153379 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.153405 kubelet[2712]: W1031 13:50:41.153398 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.153481 kubelet[2712]: E1031 13:50:41.153412 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.153653 kubelet[2712]: E1031 13:50:41.153628 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.153653 kubelet[2712]: W1031 13:50:41.153642 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.153653 kubelet[2712]: E1031 13:50:41.153652 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.154254 kubelet[2712]: E1031 13:50:41.153969 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.154363 kubelet[2712]: W1031 13:50:41.154346 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.154556 kubelet[2712]: E1031 13:50:41.154539 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.154827 kubelet[2712]: I1031 13:50:41.154801 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjrkd\" (UniqueName: \"kubernetes.io/projected/fda5ab0e-82e2-4b7d-827a-809d2fbca767-kube-api-access-mjrkd\") pod \"csi-node-driver-msghp\" (UID: \"fda5ab0e-82e2-4b7d-827a-809d2fbca767\") " pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:41.155465 kubelet[2712]: E1031 13:50:41.155394 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.155465 kubelet[2712]: W1031 13:50:41.155406 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.155465 kubelet[2712]: E1031 13:50:41.155418 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.155920 kubelet[2712]: E1031 13:50:41.155891 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.156133 kubelet[2712]: W1031 13:50:41.156104 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.156133 kubelet[2712]: E1031 13:50:41.156131 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.156882 kubelet[2712]: E1031 13:50:41.156854 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.156882 kubelet[2712]: W1031 13:50:41.156871 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.156882 kubelet[2712]: E1031 13:50:41.156884 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.157705 kubelet[2712]: E1031 13:50:41.157508 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.157705 kubelet[2712]: W1031 13:50:41.157521 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.157705 kubelet[2712]: E1031 13:50:41.157533 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.157705 kubelet[2712]: I1031 13:50:41.157556 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fda5ab0e-82e2-4b7d-827a-809d2fbca767-varrun\") pod \"csi-node-driver-msghp\" (UID: \"fda5ab0e-82e2-4b7d-827a-809d2fbca767\") " pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:41.158380 kubelet[2712]: E1031 13:50:41.158357 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.158380 kubelet[2712]: W1031 13:50:41.158374 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.158572 kubelet[2712]: E1031 13:50:41.158389 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.158572 kubelet[2712]: I1031 13:50:41.158414 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fda5ab0e-82e2-4b7d-827a-809d2fbca767-kubelet-dir\") pod \"csi-node-driver-msghp\" (UID: \"fda5ab0e-82e2-4b7d-827a-809d2fbca767\") " pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:41.159078 kubelet[2712]: E1031 13:50:41.158936 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.159078 kubelet[2712]: W1031 13:50:41.158953 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.159078 kubelet[2712]: E1031 13:50:41.158967 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.160295 kubelet[2712]: E1031 13:50:41.159644 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.160295 kubelet[2712]: W1031 13:50:41.159660 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.160295 kubelet[2712]: E1031 13:50:41.159672 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.160756 kubelet[2712]: E1031 13:50:41.160619 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.160756 kubelet[2712]: W1031 13:50:41.160632 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.160756 kubelet[2712]: E1031 13:50:41.160644 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.161363 kubelet[2712]: E1031 13:50:41.161337 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.161431 kubelet[2712]: W1031 13:50:41.161356 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.161431 kubelet[2712]: E1031 13:50:41.161388 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.193459 kubelet[2712]: E1031 13:50:41.193423 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:41.194031 containerd[1578]: time="2025-10-31T13:50:41.193963685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r7c5h,Uid:c7f9327b-320c-403e-8083-a3deebb22737,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:41.211857 containerd[1578]: time="2025-10-31T13:50:41.210964213Z" level=info msg="connecting to shim 9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19" address="unix:///run/containerd/s/4abae51999d13dc633525e3624754c6be1a33c9668481e0378b859e4fbf92171" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:41.236463 systemd[1]: Started cri-containerd-9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19.scope - libcontainer container 9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19. Oct 31 13:50:41.259608 containerd[1578]: time="2025-10-31T13:50:41.259516699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r7c5h,Uid:c7f9327b-320c-403e-8083-a3deebb22737,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\"" Oct 31 13:50:41.259679 kubelet[2712]: E1031 13:50:41.259582 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.259679 kubelet[2712]: W1031 13:50:41.259598 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.259679 kubelet[2712]: E1031 13:50:41.259628 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.259932 kubelet[2712]: E1031 13:50:41.259899 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.259956 kubelet[2712]: W1031 13:50:41.259910 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.259956 kubelet[2712]: E1031 13:50:41.259952 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.260244 kubelet[2712]: E1031 13:50:41.260227 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.260244 kubelet[2712]: W1031 13:50:41.260244 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.260313 kubelet[2712]: E1031 13:50:41.260257 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.260785 kubelet[2712]: E1031 13:50:41.260500 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:41.260844 kubelet[2712]: E1031 13:50:41.260805 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.260844 kubelet[2712]: W1031 13:50:41.260817 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.261327 kubelet[2712]: E1031 13:50:41.260841 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.261327 kubelet[2712]: E1031 13:50:41.261017 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.261327 kubelet[2712]: W1031 13:50:41.261025 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.261327 kubelet[2712]: E1031 13:50:41.261044 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.261327 kubelet[2712]: E1031 13:50:41.261199 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.261327 kubelet[2712]: W1031 13:50:41.261207 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.261327 kubelet[2712]: E1031 13:50:41.261216 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.261490 kubelet[2712]: E1031 13:50:41.261451 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.261490 kubelet[2712]: W1031 13:50:41.261464 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.261490 kubelet[2712]: E1031 13:50:41.261472 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.261919 kubelet[2712]: E1031 13:50:41.261599 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.261919 kubelet[2712]: W1031 13:50:41.261610 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.261919 kubelet[2712]: E1031 13:50:41.261626 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.261919 kubelet[2712]: E1031 13:50:41.261764 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.261919 kubelet[2712]: W1031 13:50:41.261801 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.261919 kubelet[2712]: E1031 13:50:41.261810 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.262115 kubelet[2712]: E1031 13:50:41.262045 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.262115 kubelet[2712]: W1031 13:50:41.262056 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.262115 kubelet[2712]: E1031 13:50:41.262065 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.262342 kubelet[2712]: E1031 13:50:41.262251 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.262342 kubelet[2712]: W1031 13:50:41.262260 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.262342 kubelet[2712]: E1031 13:50:41.262268 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.262430 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263222 kubelet[2712]: W1031 13:50:41.262447 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.262454 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.262679 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263222 kubelet[2712]: W1031 13:50:41.262686 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.262693 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.262838 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263222 kubelet[2712]: W1031 13:50:41.262856 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.262863 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263222 kubelet[2712]: E1031 13:50:41.263060 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263492 kubelet[2712]: W1031 13:50:41.263068 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263492 kubelet[2712]: E1031 13:50:41.263076 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.263492 kubelet[2712]: E1031 13:50:41.263234 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263492 kubelet[2712]: W1031 13:50:41.263241 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263492 kubelet[2712]: E1031 13:50:41.263249 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263492 kubelet[2712]: E1031 13:50:41.263444 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263492 kubelet[2712]: W1031 13:50:41.263452 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263492 kubelet[2712]: E1031 13:50:41.263460 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263641 kubelet[2712]: E1031 13:50:41.263589 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263641 kubelet[2712]: W1031 13:50:41.263598 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263641 kubelet[2712]: E1031 13:50:41.263605 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263744 kubelet[2712]: E1031 13:50:41.263729 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263744 kubelet[2712]: W1031 13:50:41.263738 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.263791 kubelet[2712]: E1031 13:50:41.263745 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.263929 kubelet[2712]: E1031 13:50:41.263917 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.263929 kubelet[2712]: W1031 13:50:41.263927 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.264397 kubelet[2712]: E1031 13:50:41.263935 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.264397 kubelet[2712]: E1031 13:50:41.264115 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.264397 kubelet[2712]: W1031 13:50:41.264123 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.264397 kubelet[2712]: E1031 13:50:41.264130 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.264595 kubelet[2712]: E1031 13:50:41.264579 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.264595 kubelet[2712]: W1031 13:50:41.264592 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.264656 kubelet[2712]: E1031 13:50:41.264603 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.265126 kubelet[2712]: E1031 13:50:41.264780 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.265126 kubelet[2712]: W1031 13:50:41.264794 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.265126 kubelet[2712]: E1031 13:50:41.264802 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.265126 kubelet[2712]: E1031 13:50:41.264964 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.265126 kubelet[2712]: W1031 13:50:41.264971 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.265126 kubelet[2712]: E1031 13:50:41.264978 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:41.265774 kubelet[2712]: E1031 13:50:41.265147 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.265774 kubelet[2712]: W1031 13:50:41.265155 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.265774 kubelet[2712]: E1031 13:50:41.265164 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:41.274778 kubelet[2712]: E1031 13:50:41.274722 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:41.274778 kubelet[2712]: W1031 13:50:41.274738 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:41.274778 kubelet[2712]: E1031 13:50:41.274749 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:42.261704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209769558.mount: Deactivated successfully. Oct 31 13:50:42.312739 kubelet[2712]: E1031 13:50:42.312673 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:50:43.723013 containerd[1578]: time="2025-10-31T13:50:43.722947095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:43.724056 containerd[1578]: time="2025-10-31T13:50:43.724008388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 31 13:50:43.724832 containerd[1578]: time="2025-10-31T13:50:43.724801668Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:43.727478 containerd[1578]: time="2025-10-31T13:50:43.727402103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:43.728455 containerd[1578]: time="2025-10-31T13:50:43.728425304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.580966399s" Oct 31 13:50:43.728519 containerd[1578]: time="2025-10-31T13:50:43.728459876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 31 13:50:43.730587 containerd[1578]: time="2025-10-31T13:50:43.730515639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 13:50:43.747771 containerd[1578]: time="2025-10-31T13:50:43.747738383Z" level=info msg="CreateContainer within sandbox \"e429d8330dfddc822adb385001e0c0bda6e94f3c0baccb418ae51c0a0a1228e6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 13:50:43.755424 containerd[1578]: time="2025-10-31T13:50:43.755385555Z" level=info msg="Container 38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:43.761721 containerd[1578]: 
time="2025-10-31T13:50:43.761689615Z" level=info msg="CreateContainer within sandbox \"e429d8330dfddc822adb385001e0c0bda6e94f3c0baccb418ae51c0a0a1228e6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956\"" Oct 31 13:50:43.762298 containerd[1578]: time="2025-10-31T13:50:43.762264137Z" level=info msg="StartContainer for \"38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956\"" Oct 31 13:50:43.763367 containerd[1578]: time="2025-10-31T13:50:43.763288618Z" level=info msg="connecting to shim 38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956" address="unix:///run/containerd/s/789dcf0caf1c2670b5ea8135aff6b2c84fb75c215e64a7a90b836afedc52d9b6" protocol=ttrpc version=3 Oct 31 13:50:43.782414 systemd[1]: Started cri-containerd-38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956.scope - libcontainer container 38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956. Oct 31 13:50:43.817810 containerd[1578]: time="2025-10-31T13:50:43.817780883Z" level=info msg="StartContainer for \"38254b75808378587a511bca4be75f942f879c2be00827db6ab69c2f99eba956\" returns successfully" Oct 31 13:50:44.312755 kubelet[2712]: E1031 13:50:44.312606 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:50:44.385768 kubelet[2712]: E1031 13:50:44.385680 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:44.395738 kubelet[2712]: I1031 13:50:44.395685 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6659cd544-ns2p9" podStartSLOduration=1.813399451 podStartE2EDuration="4.395667726s" podCreationTimestamp="2025-10-31 13:50:40 +0000 UTC" firstStartedPulling="2025-10-31 13:50:41.146875802 +0000 UTC m=+23.924256077" lastFinishedPulling="2025-10-31 13:50:43.729144077 +0000 UTC m=+26.506524352" observedRunningTime="2025-10-31 13:50:44.395520156 +0000 UTC m=+27.172900431" watchObservedRunningTime="2025-10-31 13:50:44.395667726 +0000 UTC m=+27.173048001" Oct 31 13:50:44.465675 kubelet[2712]: E1031 13:50:44.465619 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.465675 kubelet[2712]: W1031 13:50:44.465658 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.465812 kubelet[2712]: E1031 13:50:44.465680 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.465897 kubelet[2712]: E1031 13:50:44.465867 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.465930 kubelet[2712]: W1031 13:50:44.465879 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.465930 kubelet[2712]: E1031 13:50:44.465919 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.466080 kubelet[2712]: E1031 13:50:44.466068 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.466080 kubelet[2712]: W1031 13:50:44.466078 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.466132 kubelet[2712]: E1031 13:50:44.466085 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.466304 kubelet[2712]: E1031 13:50:44.466214 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.466304 kubelet[2712]: W1031 13:50:44.466228 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.466304 kubelet[2712]: E1031 13:50:44.466237 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.466471 kubelet[2712]: E1031 13:50:44.466438 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.466471 kubelet[2712]: W1031 13:50:44.466451 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.466471 kubelet[2712]: E1031 13:50:44.466460 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.466628 kubelet[2712]: E1031 13:50:44.466615 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.466628 kubelet[2712]: W1031 13:50:44.466626 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.466670 kubelet[2712]: E1031 13:50:44.466633 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.466767 kubelet[2712]: E1031 13:50:44.466756 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.466794 kubelet[2712]: W1031 13:50:44.466766 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.466794 kubelet[2712]: E1031 13:50:44.466774 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.466906 kubelet[2712]: E1031 13:50:44.466896 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.466937 kubelet[2712]: W1031 13:50:44.466906 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.466937 kubelet[2712]: E1031 13:50:44.466914 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.467054 kubelet[2712]: E1031 13:50:44.467044 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.467054 kubelet[2712]: W1031 13:50:44.467053 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.467102 kubelet[2712]: E1031 13:50:44.467062 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.467196 kubelet[2712]: E1031 13:50:44.467184 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.467196 kubelet[2712]: W1031 13:50:44.467196 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.467239 kubelet[2712]: E1031 13:50:44.467204 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.467353 kubelet[2712]: E1031 13:50:44.467343 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.467382 kubelet[2712]: W1031 13:50:44.467353 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.467382 kubelet[2712]: E1031 13:50:44.467361 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.467498 kubelet[2712]: E1031 13:50:44.467486 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.467521 kubelet[2712]: W1031 13:50:44.467497 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.467521 kubelet[2712]: E1031 13:50:44.467505 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.467693 kubelet[2712]: E1031 13:50:44.467679 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.467693 kubelet[2712]: W1031 13:50:44.467691 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.467748 kubelet[2712]: E1031 13:50:44.467700 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.467882 kubelet[2712]: E1031 13:50:44.467868 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.467882 kubelet[2712]: W1031 13:50:44.467879 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.467925 kubelet[2712]: E1031 13:50:44.467887 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.468021 kubelet[2712]: E1031 13:50:44.468010 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.468021 kubelet[2712]: W1031 13:50:44.468020 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.468064 kubelet[2712]: E1031 13:50:44.468027 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.483500 kubelet[2712]: E1031 13:50:44.483461 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.483500 kubelet[2712]: W1031 13:50:44.483484 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.483500 kubelet[2712]: E1031 13:50:44.483498 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.483723 kubelet[2712]: E1031 13:50:44.483693 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.483723 kubelet[2712]: W1031 13:50:44.483705 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.483723 kubelet[2712]: E1031 13:50:44.483714 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.484010 kubelet[2712]: E1031 13:50:44.483975 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.484010 kubelet[2712]: W1031 13:50:44.484000 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.484069 kubelet[2712]: E1031 13:50:44.484034 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.484286 kubelet[2712]: E1031 13:50:44.484229 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.484286 kubelet[2712]: W1031 13:50:44.484259 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.484286 kubelet[2712]: E1031 13:50:44.484269 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.484518 kubelet[2712]: E1031 13:50:44.484463 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.484518 kubelet[2712]: W1031 13:50:44.484475 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.484518 kubelet[2712]: E1031 13:50:44.484484 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.484679 kubelet[2712]: E1031 13:50:44.484664 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.484679 kubelet[2712]: W1031 13:50:44.484676 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.484725 kubelet[2712]: E1031 13:50:44.484684 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.484872 kubelet[2712]: E1031 13:50:44.484861 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.484897 kubelet[2712]: W1031 13:50:44.484872 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.484897 kubelet[2712]: E1031 13:50:44.484881 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.485038 kubelet[2712]: E1031 13:50:44.485028 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.485062 kubelet[2712]: W1031 13:50:44.485038 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.485062 kubelet[2712]: E1031 13:50:44.485046 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.485226 kubelet[2712]: E1031 13:50:44.485213 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.485226 kubelet[2712]: W1031 13:50:44.485225 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.485289 kubelet[2712]: E1031 13:50:44.485233 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.485435 kubelet[2712]: E1031 13:50:44.485421 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.485435 kubelet[2712]: W1031 13:50:44.485434 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.485475 kubelet[2712]: E1031 13:50:44.485442 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.485870 kubelet[2712]: E1031 13:50:44.485820 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.485870 kubelet[2712]: W1031 13:50:44.485852 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.485870 kubelet[2712]: E1031 13:50:44.485862 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.486072 kubelet[2712]: E1031 13:50:44.486056 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.486072 kubelet[2712]: W1031 13:50:44.486069 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.486120 kubelet[2712]: E1031 13:50:44.486078 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.486443 kubelet[2712]: E1031 13:50:44.486429 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.486443 kubelet[2712]: W1031 13:50:44.486442 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.486502 kubelet[2712]: E1031 13:50:44.486451 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.486614 kubelet[2712]: E1031 13:50:44.486600 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.486614 kubelet[2712]: W1031 13:50:44.486611 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.486661 kubelet[2712]: E1031 13:50:44.486620 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.486808 kubelet[2712]: E1031 13:50:44.486796 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.486830 kubelet[2712]: W1031 13:50:44.486808 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.486830 kubelet[2712]: E1031 13:50:44.486819 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.487065 kubelet[2712]: E1031 13:50:44.487047 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.487065 kubelet[2712]: W1031 13:50:44.487063 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.487116 kubelet[2712]: E1031 13:50:44.487074 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 13:50:44.487263 kubelet[2712]: E1031 13:50:44.487248 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.487263 kubelet[2712]: W1031 13:50:44.487260 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.487322 kubelet[2712]: E1031 13:50:44.487269 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.487471 kubelet[2712]: E1031 13:50:44.487458 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 13:50:44.487493 kubelet[2712]: W1031 13:50:44.487470 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 13:50:44.487493 kubelet[2712]: E1031 13:50:44.487478 2712 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 13:50:44.763565 containerd[1578]: time="2025-10-31T13:50:44.762919499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:44.763565 containerd[1578]: time="2025-10-31T13:50:44.763531465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 31 13:50:44.764185 containerd[1578]: time="2025-10-31T13:50:44.764156876Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:44.766250 containerd[1578]: time="2025-10-31T13:50:44.766215611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:44.766960 containerd[1578]: time="2025-10-31T13:50:44.766841022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.036291692s" Oct 31 13:50:44.766960 containerd[1578]: time="2025-10-31T13:50:44.766873033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 31 13:50:44.770463 containerd[1578]: time="2025-10-31T13:50:44.770382098Z" level=info msg="CreateContainer within sandbox \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 13:50:44.777376 containerd[1578]: time="2025-10-31T13:50:44.776974723Z" level=info msg="Container 578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56: 
CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:44.790346 containerd[1578]: time="2025-10-31T13:50:44.790311625Z" level=info msg="CreateContainer within sandbox \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\"" Oct 31 13:50:44.791725 containerd[1578]: time="2025-10-31T13:50:44.790706559Z" level=info msg="StartContainer for \"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\"" Oct 31 13:50:44.792586 containerd[1578]: time="2025-10-31T13:50:44.792560184Z" level=info msg="connecting to shim 578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56" address="unix:///run/containerd/s/4abae51999d13dc633525e3624754c6be1a33c9668481e0378b859e4fbf92171" protocol=ttrpc version=3 Oct 31 13:50:44.814420 systemd[1]: Started cri-containerd-578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56.scope - libcontainer container 578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56. Oct 31 13:50:44.845372 containerd[1578]: time="2025-10-31T13:50:44.845247930Z" level=info msg="StartContainer for \"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\" returns successfully" Oct 31 13:50:44.859246 systemd[1]: cri-containerd-578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56.scope: Deactivated successfully. Oct 31 13:50:44.859745 systemd[1]: cri-containerd-578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56.scope: Consumed 29ms CPU time, 6.3M memory peak, 4.5M written to disk. Oct 31 13:50:44.878392 containerd[1578]: time="2025-10-31T13:50:44.878180127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\" id:\"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\" pid:3427 exited_at:{seconds:1761918644 nanos:877383538}" Oct 31 13:50:44.882853 containerd[1578]: time="2025-10-31T13:50:44.882809530Z" level=info msg="received exit event container_id:\"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\" id:\"578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56\" pid:3427 exited_at:{seconds:1761918644 nanos:877383538}" Oct 31 13:50:44.917554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-578d0c7c23d90627b7161ebcb6e3c5b5464a294af7640c4e0b571f3827067d56-rootfs.mount: Deactivated successfully. 
Oct 31 13:50:45.390621 kubelet[2712]: E1031 13:50:45.389358 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:45.390621 kubelet[2712]: E1031 13:50:45.389638 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:45.392500 containerd[1578]: time="2025-10-31T13:50:45.392339088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 13:50:46.312468 kubelet[2712]: E1031 13:50:46.312378 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:50:46.392629 kubelet[2712]: E1031 13:50:46.392594 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:48.107609 containerd[1578]: time="2025-10-31T13:50:48.107569021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:48.108081 containerd[1578]: time="2025-10-31T13:50:48.108049399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 31 13:50:48.108797 containerd[1578]: time="2025-10-31T13:50:48.108756163Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:48.111626 containerd[1578]: time="2025-10-31T13:50:48.111597501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:48.112039 containerd[1578]: time="2025-10-31T13:50:48.112012781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.719478151s" Oct 31 13:50:48.112039 containerd[1578]: time="2025-10-31T13:50:48.112044110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 31 13:50:48.115778 containerd[1578]: time="2025-10-31T13:50:48.115741175Z" level=info msg="CreateContainer within sandbox \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 13:50:48.128177 containerd[1578]: time="2025-10-31T13:50:48.128131384Z" level=info msg="Container 64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:48.136728 containerd[1578]: time="2025-10-31T13:50:48.136676526Z" level=info msg="CreateContainer within sandbox \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\"" Oct 31 13:50:48.137245 containerd[1578]: time="2025-10-31T13:50:48.137181391Z" level=info msg="StartContainer for \"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\"" Oct 31 13:50:48.139164 containerd[1578]: time="2025-10-31T13:50:48.138739640Z" level=info msg="connecting to shim 64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb" address="unix:///run/containerd/s/4abae51999d13dc633525e3624754c6be1a33c9668481e0378b859e4fbf92171" protocol=ttrpc version=3 Oct 31 13:50:48.158452 systemd[1]: Started cri-containerd-64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb.scope - libcontainer container 64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb. Oct 31 13:50:48.192495 containerd[1578]: time="2025-10-31T13:50:48.192449633Z" level=info msg="StartContainer for \"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\" returns successfully" Oct 31 13:50:48.312111 kubelet[2712]: E1031 13:50:48.312014 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:50:48.401566 kubelet[2712]: E1031 13:50:48.401515 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:48.684398 systemd[1]: cri-containerd-64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb.scope: Deactivated successfully. Oct 31 13:50:48.684755 systemd[1]: cri-containerd-64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb.scope: Consumed 435ms CPU time, 177.8M memory peak, 3M read from disk, 165.9M written to disk. Oct 31 13:50:48.698635 containerd[1578]: time="2025-10-31T13:50:48.698589642Z" level=info msg="received exit event container_id:\"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\" id:\"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\" pid:3485 exited_at:{seconds:1761918648 nanos:698411511}" Oct 31 13:50:48.698793 containerd[1578]: time="2025-10-31T13:50:48.698758731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\" id:\"64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb\" pid:3485 exited_at:{seconds:1761918648 nanos:698411511}" Oct 31 13:50:48.716454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64cd14b56ef173f35cba8a5febc122a230cee3610a5731b20a1988959ab50bbb-rootfs.mount: Deactivated successfully. Oct 31 13:50:48.739985 kubelet[2712]: I1031 13:50:48.739949 2712 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 13:50:48.889904 systemd[1]: Created slice kubepods-burstable-pod6f0df04e_9828_4f74_a5ef_4e403b9cca2d.slice - libcontainer container kubepods-burstable-pod6f0df04e_9828_4f74_a5ef_4e403b9cca2d.slice. Oct 31 13:50:48.898448 systemd[1]: Created slice kubepods-besteffort-poddcd2d84c_2d0d_4ab4_85c1_df6fb5617eca.slice - libcontainer container kubepods-besteffort-poddcd2d84c_2d0d_4ab4_85c1_df6fb5617eca.slice. 
Oct 31 13:50:48.903237 systemd[1]: Created slice kubepods-besteffort-pod341ea4b7_59e9_45df_9dd6_88324f67c306.slice - libcontainer container kubepods-besteffort-pod341ea4b7_59e9_45df_9dd6_88324f67c306.slice. Oct 31 13:50:48.908306 systemd[1]: Created slice kubepods-burstable-pod8be1d4da_a7a8_4577_85a3_7fd88fd553c4.slice - libcontainer container kubepods-burstable-pod8be1d4da_a7a8_4577_85a3_7fd88fd553c4.slice. Oct 31 13:50:48.913754 systemd[1]: Created slice kubepods-besteffort-podc599a8a3_4205_4561_8130_ab9955590d60.slice - libcontainer container kubepods-besteffort-podc599a8a3_4205_4561_8130_ab9955590d60.slice. Oct 31 13:50:48.914488 kubelet[2712]: I1031 13:50:48.914398 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c599a8a3-4205-4561-8130-ab9955590d60-whisker-ca-bundle\") pod \"whisker-84c744bbfb-cxkw4\" (UID: \"c599a8a3-4205-4561-8130-ab9955590d60\") " pod="calico-system/whisker-84c744bbfb-cxkw4" Oct 31 13:50:48.914488 kubelet[2712]: I1031 13:50:48.914436 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lttqx\" (UniqueName: \"kubernetes.io/projected/8be1d4da-a7a8-4577-85a3-7fd88fd553c4-kube-api-access-lttqx\") pod \"coredns-66bc5c9577-kwx75\" (UID: \"8be1d4da-a7a8-4577-85a3-7fd88fd553c4\") " pod="kube-system/coredns-66bc5c9577-kwx75" Oct 31 13:50:48.914488 kubelet[2712]: I1031 13:50:48.914453 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353-config\") pod \"goldmane-7c778bb748-ddp7z\" (UID: \"2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353\") " pod="calico-system/goldmane-7c778bb748-ddp7z" Oct 31 13:50:48.914637 kubelet[2712]: I1031 13:50:48.914509 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f0df04e-9828-4f74-a5ef-4e403b9cca2d-config-volume\") pod \"coredns-66bc5c9577-k556c\" (UID: \"6f0df04e-9828-4f74-a5ef-4e403b9cca2d\") " pod="kube-system/coredns-66bc5c9577-k556c" Oct 31 13:50:48.914637 kubelet[2712]: I1031 13:50:48.914545 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/341ea4b7-59e9-45df-9dd6-88324f67c306-calico-apiserver-certs\") pod \"calico-apiserver-585cc8fbcc-v9wdq\" (UID: \"341ea4b7-59e9-45df-9dd6-88324f67c306\") " pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" Oct 31 13:50:48.914637 kubelet[2712]: I1031 13:50:48.914579 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mtd\" (UniqueName: \"kubernetes.io/projected/c599a8a3-4205-4561-8130-ab9955590d60-kube-api-access-c2mtd\") pod \"whisker-84c744bbfb-cxkw4\" (UID: \"c599a8a3-4205-4561-8130-ab9955590d60\") " pod="calico-system/whisker-84c744bbfb-cxkw4" Oct 31 13:50:48.914637 kubelet[2712]: I1031 13:50:48.914594 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-ddp7z\" (UID: \"2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353\") " pod="calico-system/goldmane-7c778bb748-ddp7z" Oct 31 13:50:48.914637 kubelet[2712]: I1031 13:50:48.914622 2712 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp67d\" (UniqueName: \"kubernetes.io/projected/fc3da513-331e-433f-b59a-3df653173d16-kube-api-access-cp67d\") pod \"calico-kube-controllers-54d64b9b44-7kmqj\" (UID: \"fc3da513-331e-433f-b59a-3df653173d16\") " pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" Oct 31 13:50:48.915079 kubelet[2712]: I1031 13:50:48.914651 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353-goldmane-key-pair\") pod \"goldmane-7c778bb748-ddp7z\" (UID: \"2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353\") " pod="calico-system/goldmane-7c778bb748-ddp7z" Oct 31 13:50:48.915079 kubelet[2712]: I1031 13:50:48.914668 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kfw\" (UniqueName: \"kubernetes.io/projected/2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353-kube-api-access-82kfw\") pod \"goldmane-7c778bb748-ddp7z\" (UID: \"2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353\") " pod="calico-system/goldmane-7c778bb748-ddp7z" Oct 31 13:50:48.915079 kubelet[2712]: I1031 13:50:48.914696 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpf5t\" (UniqueName: \"kubernetes.io/projected/6f0df04e-9828-4f74-a5ef-4e403b9cca2d-kube-api-access-vpf5t\") pod \"coredns-66bc5c9577-k556c\" (UID: \"6f0df04e-9828-4f74-a5ef-4e403b9cca2d\") " pod="kube-system/coredns-66bc5c9577-k556c" Oct 31 13:50:48.915079 kubelet[2712]: I1031 13:50:48.914713 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc3da513-331e-433f-b59a-3df653173d16-tigera-ca-bundle\") pod \"calico-kube-controllers-54d64b9b44-7kmqj\" (UID: \"fc3da513-331e-433f-b59a-3df653173d16\") " pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" Oct 31 13:50:48.915079 kubelet[2712]: I1031 13:50:48.914741 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktsl\" (UniqueName: \"kubernetes.io/projected/341ea4b7-59e9-45df-9dd6-88324f67c306-kube-api-access-hktsl\") pod \"calico-apiserver-585cc8fbcc-v9wdq\" (UID: \"341ea4b7-59e9-45df-9dd6-88324f67c306\") " pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" Oct 31 13:50:48.915372 kubelet[2712]: I1031 13:50:48.914797 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xmxf\" (UniqueName: \"kubernetes.io/projected/dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca-kube-api-access-4xmxf\") pod \"calico-apiserver-585cc8fbcc-rdm7t\" (UID: \"dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca\") " pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" Oct 31 13:50:48.915372 kubelet[2712]: I1031 13:50:48.914836 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c599a8a3-4205-4561-8130-ab9955590d60-whisker-backend-key-pair\") pod \"whisker-84c744bbfb-cxkw4\" (UID: \"c599a8a3-4205-4561-8130-ab9955590d60\") " pod="calico-system/whisker-84c744bbfb-cxkw4" Oct 31 13:50:48.915372 kubelet[2712]: I1031 13:50:48.914886 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8be1d4da-a7a8-4577-85a3-7fd88fd553c4-config-volume\") pod \"coredns-66bc5c9577-kwx75\" (UID: \"8be1d4da-a7a8-4577-85a3-7fd88fd553c4\") " pod="kube-system/coredns-66bc5c9577-kwx75" Oct 31 13:50:48.915372 kubelet[2712]: I1031 13:50:48.914921 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca-calico-apiserver-certs\") pod \"calico-apiserver-585cc8fbcc-rdm7t\" (UID: \"dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca\") " pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" Oct 31 13:50:48.920060 systemd[1]: Created slice kubepods-besteffort-podfc3da513_331e_433f_b59a_3df653173d16.slice - libcontainer container kubepods-besteffort-podfc3da513_331e_433f_b59a_3df653173d16.slice. Oct 31 13:50:48.924807 systemd[1]: Created slice kubepods-besteffort-pod2bd4fb9f_2b11_4c32_9aaf_f7e5c672a353.slice - libcontainer container kubepods-besteffort-pod2bd4fb9f_2b11_4c32_9aaf_f7e5c672a353.slice. Oct 31 13:50:49.195900 kubelet[2712]: E1031 13:50:49.195860 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:49.197865 containerd[1578]: time="2025-10-31T13:50:49.197828416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k556c,Uid:6f0df04e-9828-4f74-a5ef-4e403b9cca2d,Namespace:kube-system,Attempt:0,}" Oct 31 13:50:49.205510 containerd[1578]: time="2025-10-31T13:50:49.205270202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-rdm7t,Uid:dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:50:49.213643 kubelet[2712]: E1031 13:50:49.213511 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:49.222936 containerd[1578]: time="2025-10-31T13:50:49.222895734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-v9wdq,Uid:341ea4b7-59e9-45df-9dd6-88324f67c306,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:50:49.223881 containerd[1578]: time="2025-10-31T13:50:49.223019568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c744bbfb-cxkw4,Uid:c599a8a3-4205-4561-8130-ab9955590d60,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:49.223881 containerd[1578]: time="2025-10-31T13:50:49.223060540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwx75,Uid:8be1d4da-a7a8-4577-85a3-7fd88fd553c4,Namespace:kube-system,Attempt:0,}" Oct 31 13:50:49.224728 containerd[1578]: time="2025-10-31T13:50:49.224699995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d64b9b44-7kmqj,Uid:fc3da513-331e-433f-b59a-3df653173d16,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:49.232763 containerd[1578]: time="2025-10-31T13:50:49.232728663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ddp7z,Uid:2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:49.325444 containerd[1578]: time="2025-10-31T13:50:49.325391425Z" level=error msg="Failed to destroy network for sandbox \"2df3380c20faa39f136bb57ebca4231e44be0a9d0f6a87b530b1d1e622e8d3a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.327267 containerd[1578]: time="2025-10-31T13:50:49.326714192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k556c,Uid:6f0df04e-9828-4f74-a5ef-4e403b9cca2d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3380c20faa39f136bb57ebca4231e44be0a9d0f6a87b530b1d1e622e8d3a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.327542 kubelet[2712]: E1031 13:50:49.327453 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3380c20faa39f136bb57ebca4231e44be0a9d0f6a87b530b1d1e622e8d3a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.327542 kubelet[2712]: E1031 13:50:49.327512 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3380c20faa39f136bb57ebca4231e44be0a9d0f6a87b530b1d1e622e8d3a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k556c" Oct 31 13:50:49.328187 kubelet[2712]: E1031 13:50:49.327529 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df3380c20faa39f136bb57ebca4231e44be0a9d0f6a87b530b1d1e622e8d3a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k556c" Oct 31 13:50:49.328187 kubelet[2712]: E1031 13:50:49.327692 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k556c_kube-system(6f0df04e-9828-4f74-a5ef-4e403b9cca2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k556c_kube-system(6f0df04e-9828-4f74-a5ef-4e403b9cca2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2df3380c20faa39f136bb57ebca4231e44be0a9d0f6a87b530b1d1e622e8d3a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k556c" podUID="6f0df04e-9828-4f74-a5ef-4e403b9cca2d" Oct 31 13:50:49.340573 containerd[1578]: time="2025-10-31T13:50:49.340514582Z" level=error msg="Failed to destroy network for sandbox \"31c5b110324a1021b2cc03fea0c10d62593c386e881ec47a6ffb98ce8d9cff44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.342180 containerd[1578]: time="2025-10-31T13:50:49.342134712Z" level=error msg="Failed to destroy network for sandbox \"eed76eebb70016ee297d8c1737265985ad2f12400d96d3408839dc78043e8910\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.343233 containerd[1578]: time="2025-10-31T13:50:49.342625328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-rdm7t,Uid:dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c5b110324a1021b2cc03fea0c10d62593c386e881ec47a6ffb98ce8d9cff44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.344579 kubelet[2712]: E1031 13:50:49.342867 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c5b110324a1021b2cc03fea0c10d62593c386e881ec47a6ffb98ce8d9cff44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.344579 kubelet[2712]: E1031 13:50:49.342920 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c5b110324a1021b2cc03fea0c10d62593c386e881ec47a6ffb98ce8d9cff44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" Oct 31 13:50:49.344579 kubelet[2712]: E1031 13:50:49.342938 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c5b110324a1021b2cc03fea0c10d62593c386e881ec47a6ffb98ce8d9cff44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" Oct 31 13:50:49.344714 containerd[1578]: time="2025-10-31T13:50:49.343634969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-v9wdq,Uid:341ea4b7-59e9-45df-9dd6-88324f67c306,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed76eebb70016ee297d8c1737265985ad2f12400d96d3408839dc78043e8910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.344759 kubelet[2712]: E1031 13:50:49.342994 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-585cc8fbcc-rdm7t_calico-apiserver(dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-585cc8fbcc-rdm7t_calico-apiserver(dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c5b110324a1021b2cc03fea0c10d62593c386e881ec47a6ffb98ce8d9cff44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" podUID="dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca" Oct 31 13:50:49.344759 kubelet[2712]: E1031 13:50:49.344295 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed76eebb70016ee297d8c1737265985ad2f12400d96d3408839dc78043e8910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.344759 kubelet[2712]: E1031 13:50:49.344378 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed76eebb70016ee297d8c1737265985ad2f12400d96d3408839dc78043e8910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" Oct 31 13:50:49.344842 kubelet[2712]: E1031 13:50:49.344396 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed76eebb70016ee297d8c1737265985ad2f12400d96d3408839dc78043e8910\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" Oct 31 13:50:49.344842 kubelet[2712]: E1031 13:50:49.344536 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-585cc8fbcc-v9wdq_calico-apiserver(341ea4b7-59e9-45df-9dd6-88324f67c306)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-585cc8fbcc-v9wdq_calico-apiserver(341ea4b7-59e9-45df-9dd6-88324f67c306)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eed76eebb70016ee297d8c1737265985ad2f12400d96d3408839dc78043e8910\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" podUID="341ea4b7-59e9-45df-9dd6-88324f67c306" Oct 31 13:50:49.344926 containerd[1578]: time="2025-10-31T13:50:49.344893478Z" level=error msg="Failed to destroy network for sandbox \"209a18fbf50ef711ce88d17ba54eab0a0aa9b02364a92a44ab6ac2e8a5e2f8fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.346074 containerd[1578]: time="2025-10-31T13:50:49.346028913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ddp7z,Uid:2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"209a18fbf50ef711ce88d17ba54eab0a0aa9b02364a92a44ab6ac2e8a5e2f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.346436 kubelet[2712]: E1031 13:50:49.346382 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"209a18fbf50ef711ce88d17ba54eab0a0aa9b02364a92a44ab6ac2e8a5e2f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.346503 kubelet[2712]: E1031 13:50:49.346447 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"209a18fbf50ef711ce88d17ba54eab0a0aa9b02364a92a44ab6ac2e8a5e2f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-ddp7z" Oct 31 13:50:49.346503 kubelet[2712]: E1031 13:50:49.346463 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"209a18fbf50ef711ce88d17ba54eab0a0aa9b02364a92a44ab6ac2e8a5e2f8fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-ddp7z" Oct 31 13:50:49.346816 kubelet[2712]: E1031 13:50:49.346554 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-ddp7z_calico-system(2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-ddp7z_calico-system(2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"209a18fbf50ef711ce88d17ba54eab0a0aa9b02364a92a44ab6ac2e8a5e2f8fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-ddp7z" podUID="2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353" Oct 31 13:50:49.355436 containerd[1578]: time="2025-10-31T13:50:49.355395393Z" level=error msg="Failed to destroy network for sandbox \"69084f598f17efe9f5885296490d24d5e413775e2a667cf4571e5bf95a44ced7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.356521 containerd[1578]: time="2025-10-31T13:50:49.356478894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwx75,Uid:8be1d4da-a7a8-4577-85a3-7fd88fd553c4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69084f598f17efe9f5885296490d24d5e413775e2a667cf4571e5bf95a44ced7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.357065 kubelet[2712]: E1031 13:50:49.357015 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69084f598f17efe9f5885296490d24d5e413775e2a667cf4571e5bf95a44ced7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.357129 kubelet[2712]: E1031 13:50:49.357075 2712 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69084f598f17efe9f5885296490d24d5e413775e2a667cf4571e5bf95a44ced7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kwx75" Oct 31 13:50:49.357129 kubelet[2712]: E1031 13:50:49.357095 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69084f598f17efe9f5885296490d24d5e413775e2a667cf4571e5bf95a44ced7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kwx75" Oct 31 13:50:49.357194 kubelet[2712]: E1031 13:50:49.357135 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kwx75_kube-system(8be1d4da-a7a8-4577-85a3-7fd88fd553c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kwx75_kube-system(8be1d4da-a7a8-4577-85a3-7fd88fd553c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69084f598f17efe9f5885296490d24d5e413775e2a667cf4571e5bf95a44ced7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kwx75" podUID="8be1d4da-a7a8-4577-85a3-7fd88fd553c4" Oct 31 13:50:49.366294 containerd[1578]: time="2025-10-31T13:50:49.365932798Z" level=error msg="Failed to destroy network for sandbox \"da53c0d74267fd4a6c8e8cb962ece166b63b45c27567bcb863ba153f32df244c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.366931 containerd[1578]: time="2025-10-31T13:50:49.366884502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c744bbfb-cxkw4,Uid:c599a8a3-4205-4561-8130-ab9955590d60,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da53c0d74267fd4a6c8e8cb962ece166b63b45c27567bcb863ba153f32df244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.367141 kubelet[2712]: E1031 13:50:49.367095 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da53c0d74267fd4a6c8e8cb962ece166b63b45c27567bcb863ba153f32df244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.367197 kubelet[2712]: E1031 13:50:49.367165 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da53c0d74267fd4a6c8e8cb962ece166b63b45c27567bcb863ba153f32df244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-84c744bbfb-cxkw4" Oct 31 13:50:49.367197 kubelet[2712]: E1031 13:50:49.367183 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da53c0d74267fd4a6c8e8cb962ece166b63b45c27567bcb863ba153f32df244c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84c744bbfb-cxkw4" Oct 31 13:50:49.367264 kubelet[2712]: E1031 13:50:49.367236 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84c744bbfb-cxkw4_calico-system(c599a8a3-4205-4561-8130-ab9955590d60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84c744bbfb-cxkw4_calico-system(c599a8a3-4205-4561-8130-ab9955590d60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da53c0d74267fd4a6c8e8cb962ece166b63b45c27567bcb863ba153f32df244c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84c744bbfb-cxkw4" podUID="c599a8a3-4205-4561-8130-ab9955590d60" Oct 31 13:50:49.374863 containerd[1578]: time="2025-10-31T13:50:49.374820945Z" level=error msg="Failed to destroy network for sandbox \"3112d8a35952adb281d217c962c770036b8bcbe0e5d26b6d1a244b37bf9bb6de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.375989 containerd[1578]: time="2025-10-31T13:50:49.375954380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d64b9b44-7kmqj,Uid:fc3da513-331e-433f-b59a-3df653173d16,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3112d8a35952adb281d217c962c770036b8bcbe0e5d26b6d1a244b37bf9bb6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.376215 kubelet[2712]: E1031 13:50:49.376169 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3112d8a35952adb281d217c962c770036b8bcbe0e5d26b6d1a244b37bf9bb6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:49.376262 kubelet[2712]: E1031 13:50:49.376232 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3112d8a35952adb281d217c962c770036b8bcbe0e5d26b6d1a244b37bf9bb6de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" Oct 31 13:50:49.376305 kubelet[2712]: E1031 13:50:49.376255 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3112d8a35952adb281d217c962c770036b8bcbe0e5d26b6d1a244b37bf9bb6de\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" Oct 31 13:50:49.376354 kubelet[2712]: E1031 13:50:49.376324 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54d64b9b44-7kmqj_calico-system(fc3da513-331e-433f-b59a-3df653173d16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54d64b9b44-7kmqj_calico-system(fc3da513-331e-433f-b59a-3df653173d16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3112d8a35952adb281d217c962c770036b8bcbe0e5d26b6d1a244b37bf9bb6de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" podUID="fc3da513-331e-433f-b59a-3df653173d16" Oct 31 13:50:49.406182 kubelet[2712]: E1031 13:50:49.406154 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:49.406988 containerd[1578]: time="2025-10-31T13:50:49.406961587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 13:50:50.129734 systemd[1]: run-netns-cni\x2d5df51c38\x2dea47\x2db931\x2d44b8\x2df778596ead64.mount: Deactivated successfully. Oct 31 13:50:50.129829 systemd[1]: run-netns-cni\x2d9328edd6\x2d881a\x2da880\x2d9cfd\x2d33208f31451e.mount: Deactivated successfully. Oct 31 13:50:50.129888 systemd[1]: run-netns-cni\x2d395cdc48\x2d40d2\x2d63fb\x2d1454\x2dad6ab02da81c.mount: Deactivated successfully. Oct 31 13:50:50.129931 systemd[1]: run-netns-cni\x2d7d09a0f0\x2dfe08\x2d93b7\x2dafb6\x2d493c13018b12.mount: Deactivated successfully. Oct 31 13:50:50.129973 systemd[1]: run-netns-cni\x2d21b759b0\x2d3e96\x2d02c8\x2d5c36\x2d5322632808de.mount: Deactivated successfully. Oct 31 13:50:50.318795 systemd[1]: Created slice kubepods-besteffort-podfda5ab0e_82e2_4b7d_827a_809d2fbca767.slice - libcontainer container kubepods-besteffort-podfda5ab0e_82e2_4b7d_827a_809d2fbca767.slice. Oct 31 13:50:50.321974 containerd[1578]: time="2025-10-31T13:50:50.321932764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msghp,Uid:fda5ab0e-82e2-4b7d-827a-809d2fbca767,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:50.367660 containerd[1578]: time="2025-10-31T13:50:50.367549617Z" level=error msg="Failed to destroy network for sandbox \"a7ae75c7915b1444b673f4baac761067f8bdaea548aa6563dc957d31d957be89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:50.369182 systemd[1]: run-netns-cni\x2de96c41b0\x2d03a5\x2d7552\x2d20d4\x2d8f1346648c6d.mount: Deactivated successfully. 
Oct 31 13:50:50.371197 containerd[1578]: time="2025-10-31T13:50:50.371159264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msghp,Uid:fda5ab0e-82e2-4b7d-827a-809d2fbca767,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ae75c7915b1444b673f4baac761067f8bdaea548aa6563dc957d31d957be89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:50.371944 kubelet[2712]: E1031 13:50:50.371567 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ae75c7915b1444b673f4baac761067f8bdaea548aa6563dc957d31d957be89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 13:50:50.371944 kubelet[2712]: E1031 13:50:50.371625 2712 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ae75c7915b1444b673f4baac761067f8bdaea548aa6563dc957d31d957be89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:50.371944 kubelet[2712]: E1031 13:50:50.371643 2712 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ae75c7915b1444b673f4baac761067f8bdaea548aa6563dc957d31d957be89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-msghp" Oct 31 13:50:50.372259 kubelet[2712]: E1031 13:50:50.371696 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-msghp_calico-system(fda5ab0e-82e2-4b7d-827a-809d2fbca767)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-msghp_calico-system(fda5ab0e-82e2-4b7d-827a-809d2fbca767)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7ae75c7915b1444b673f4baac761067f8bdaea548aa6563dc957d31d957be89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:50:53.290088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067733603.mount: Deactivated successfully. 
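Every CreatePodSandbox failure above is the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node only writes once it is running with /var/lib/calico mounted, and the file is absent here because the calico/node image is still being pulled. Below is a minimal diagnostic sketch for that state, not part of the log; it assumes shell access on the affected node plus a working kubectl context, and while the calico-system namespace comes from the pod names in the log, the k8s-app=calico-node label is a conventional assumption.

```python
#!/usr/bin/env python3
"""Illustrative check for the 'stat /var/lib/calico/nodename' CNI failures (sketch only)."""
import os
import subprocess

NODENAME_FILE = "/var/lib/calico/nodename"

def nodename_file_present() -> bool:
    # calico/node writes this file after it starts; the CNI plugin stats it on every ADD/DEL.
    if os.path.isfile(NODENAME_FILE):
        with open(NODENAME_FILE) as f:
            print(f"{NODENAME_FILE} -> {f.read().strip()!r}")
        return True
    print(f"{NODENAME_FILE} missing: calico/node has not initialised on this node yet")
    return False

def show_calico_node_pods() -> None:
    # List the calico-node pods so a Pending/ImagePull/CrashLoop instance is easy to spot.
    subprocess.run(
        ["kubectl", "get", "pods", "-n", "calico-system",
         "-l", "k8s-app=calico-node", "-o", "wide"],
        check=False,
    )

if __name__ == "__main__":
    if not nodename_file_present():
        show_calico_node_pods()
```

Once the calico-node container starts (as it does further down, after the ghcr.io/flatcar/calico/node:v3.30.4 pull completes), the file appears and the pending sandboxes can be retried successfully.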
Oct 31 13:50:53.548930 containerd[1578]: time="2025-10-31T13:50:53.548812077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:53.549425 containerd[1578]: time="2025-10-31T13:50:53.549369932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 31 13:50:53.553051 containerd[1578]: time="2025-10-31T13:50:53.553009612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.146003213s" Oct 31 13:50:53.553051 containerd[1578]: time="2025-10-31T13:50:53.553048901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 31 13:50:53.568153 containerd[1578]: time="2025-10-31T13:50:53.568098739Z" level=info msg="CreateContainer within sandbox \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 13:50:53.575268 containerd[1578]: time="2025-10-31T13:50:53.575183732Z" level=info msg="Container 72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:50:53.580377 containerd[1578]: time="2025-10-31T13:50:53.580241715Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:53.581160 containerd[1578]: time="2025-10-31T13:50:53.581108444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 13:50:53.584516 containerd[1578]: time="2025-10-31T13:50:53.584476658Z" level=info msg="CreateContainer within sandbox \"9b066bc4311333d8911db7fa5615ef1f81e1406656f99d043b2511aa8bbacf19\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2\"" Oct 31 13:50:53.585225 containerd[1578]: time="2025-10-31T13:50:53.585200593Z" level=info msg="StartContainer for \"72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2\"" Oct 31 13:50:53.586592 containerd[1578]: time="2025-10-31T13:50:53.586554681Z" level=info msg="connecting to shim 72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2" address="unix:///run/containerd/s/4abae51999d13dc633525e3624754c6be1a33c9668481e0378b859e4fbf92171" protocol=ttrpc version=3 Oct 31 13:50:53.609456 systemd[1]: Started cri-containerd-72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2.scope - libcontainer container 72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2. Oct 31 13:50:53.644404 containerd[1578]: time="2025-10-31T13:50:53.644366536Z" level=info msg="StartContainer for \"72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2\" returns successfully" Oct 31 13:50:53.759915 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 13:50:53.760015 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 31 13:50:53.950823 kubelet[2712]: I1031 13:50:53.950737 2712 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2mtd\" (UniqueName: \"kubernetes.io/projected/c599a8a3-4205-4561-8130-ab9955590d60-kube-api-access-c2mtd\") pod \"c599a8a3-4205-4561-8130-ab9955590d60\" (UID: \"c599a8a3-4205-4561-8130-ab9955590d60\") " Oct 31 13:50:53.950823 kubelet[2712]: I1031 13:50:53.950809 2712 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c599a8a3-4205-4561-8130-ab9955590d60-whisker-ca-bundle\") pod \"c599a8a3-4205-4561-8130-ab9955590d60\" (UID: \"c599a8a3-4205-4561-8130-ab9955590d60\") " Oct 31 13:50:53.950823 kubelet[2712]: I1031 13:50:53.950835 2712 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c599a8a3-4205-4561-8130-ab9955590d60-whisker-backend-key-pair\") pod \"c599a8a3-4205-4561-8130-ab9955590d60\" (UID: \"c599a8a3-4205-4561-8130-ab9955590d60\") " Oct 31 13:50:53.957639 kubelet[2712]: I1031 13:50:53.957518 2712 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c599a8a3-4205-4561-8130-ab9955590d60-kube-api-access-c2mtd" (OuterVolumeSpecName: "kube-api-access-c2mtd") pod "c599a8a3-4205-4561-8130-ab9955590d60" (UID: "c599a8a3-4205-4561-8130-ab9955590d60"). InnerVolumeSpecName "kube-api-access-c2mtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 13:50:53.959381 kubelet[2712]: I1031 13:50:53.959336 2712 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c599a8a3-4205-4561-8130-ab9955590d60-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c599a8a3-4205-4561-8130-ab9955590d60" (UID: "c599a8a3-4205-4561-8130-ab9955590d60"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 13:50:53.963497 kubelet[2712]: I1031 13:50:53.963465 2712 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c599a8a3-4205-4561-8130-ab9955590d60-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c599a8a3-4205-4561-8130-ab9955590d60" (UID: "c599a8a3-4205-4561-8130-ab9955590d60"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 13:50:54.051739 kubelet[2712]: I1031 13:50:54.051676 2712 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c599a8a3-4205-4561-8130-ab9955590d60-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 13:50:54.051739 kubelet[2712]: I1031 13:50:54.051708 2712 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c2mtd\" (UniqueName: \"kubernetes.io/projected/c599a8a3-4205-4561-8130-ab9955590d60-kube-api-access-c2mtd\") on node \"localhost\" DevicePath \"\"" Oct 31 13:50:54.051739 kubelet[2712]: I1031 13:50:54.051720 2712 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c599a8a3-4205-4561-8130-ab9955590d60-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 13:50:54.288717 systemd[1]: var-lib-kubelet-pods-c599a8a3\x2d4205\x2d4561\x2d8130\x2dab9955590d60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2mtd.mount: Deactivated successfully. 
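The systemd mount units being deactivated here are the whisker pod's kubelet volume paths run through systemd's path escaping: '/' becomes '-', and characters outside the allowed set become \xNN escapes, which is why the pod UID's dashes show up as \x2d and the '~' in kubernetes.io~projected shows up as \x7e. The sketch below is a rough illustrative re-implementation of that escaping, not systemd's actual code, to make the unit names above readable.

```python
# Rough re-implementation of systemd's path escaping, for decoding the mount unit
# names in the log above; illustration only.
def systemd_escape_path(path: str) -> str:
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")                      # path separators become dashes
        elif ch.isalnum() or (ch in "_:." and not (i == 0 and ch == ".")):
            out.append(ch)                       # allowed characters pass through
        else:
            out.append("\\x%02x" % ord(ch))      # everything else becomes \xNN
    return "".join(out)

volume_path = ("/var/lib/kubelet/pods/c599a8a3-4205-4561-8130-ab9955590d60"
               "/volumes/kubernetes.io~projected/kube-api-access-c2mtd")
print(systemd_escape_path(volume_path) + ".mount")
# prints the same kube-api-access-c2mtd mount unit name that appears in the log above
```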
Oct 31 13:50:54.288812 systemd[1]: var-lib-kubelet-pods-c599a8a3\x2d4205\x2d4561\x2d8130\x2dab9955590d60-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 13:50:54.423362 kubelet[2712]: E1031 13:50:54.422330 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:54.432820 systemd[1]: Removed slice kubepods-besteffort-podc599a8a3_4205_4561_8130_ab9955590d60.slice - libcontainer container kubepods-besteffort-podc599a8a3_4205_4561_8130_ab9955590d60.slice. Oct 31 13:50:54.438166 kubelet[2712]: I1031 13:50:54.438035 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r7c5h" podStartSLOduration=2.145543034 podStartE2EDuration="14.438019581s" podCreationTimestamp="2025-10-31 13:50:40 +0000 UTC" firstStartedPulling="2025-10-31 13:50:41.261203707 +0000 UTC m=+24.038583982" lastFinishedPulling="2025-10-31 13:50:53.553680254 +0000 UTC m=+36.331060529" observedRunningTime="2025-10-31 13:50:54.43780197 +0000 UTC m=+37.215182325" watchObservedRunningTime="2025-10-31 13:50:54.438019581 +0000 UTC m=+37.215399856" Oct 31 13:50:54.502097 systemd[1]: Created slice kubepods-besteffort-pod87d6c415_1e44_453b_af0a_7b2c40a8254b.slice - libcontainer container kubepods-besteffort-pod87d6c415_1e44_453b_af0a_7b2c40a8254b.slice. Oct 31 13:50:54.556093 kubelet[2712]: I1031 13:50:54.555977 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87d6c415-1e44-453b-af0a-7b2c40a8254b-whisker-backend-key-pair\") pod \"whisker-69d9d786b6-2rbvr\" (UID: \"87d6c415-1e44-453b-af0a-7b2c40a8254b\") " pod="calico-system/whisker-69d9d786b6-2rbvr" Oct 31 13:50:54.556093 kubelet[2712]: I1031 13:50:54.556023 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgzp9\" (UniqueName: \"kubernetes.io/projected/87d6c415-1e44-453b-af0a-7b2c40a8254b-kube-api-access-wgzp9\") pod \"whisker-69d9d786b6-2rbvr\" (UID: \"87d6c415-1e44-453b-af0a-7b2c40a8254b\") " pod="calico-system/whisker-69d9d786b6-2rbvr" Oct 31 13:50:54.556093 kubelet[2712]: I1031 13:50:54.556076 2712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d6c415-1e44-453b-af0a-7b2c40a8254b-whisker-ca-bundle\") pod \"whisker-69d9d786b6-2rbvr\" (UID: \"87d6c415-1e44-453b-af0a-7b2c40a8254b\") " pod="calico-system/whisker-69d9d786b6-2rbvr" Oct 31 13:50:54.585523 containerd[1578]: time="2025-10-31T13:50:54.585478705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2\" id:\"9cfec2ec1369e48d7f10dd59936aa1ff8c2fb9273eac6c1b60f679962720f5cc\" pid:3876 exit_status:1 exited_at:{seconds:1761918654 nanos:585181836}" Oct 31 13:50:54.811300 containerd[1578]: time="2025-10-31T13:50:54.810621898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69d9d786b6-2rbvr,Uid:87d6c415-1e44-453b-af0a-7b2c40a8254b,Namespace:calico-system,Attempt:0,}" Oct 31 13:50:54.997935 systemd-networkd[1491]: caliac764a11f2b: Link UP Oct 31 13:50:54.998318 systemd-networkd[1491]: caliac764a11f2b: Gained carrier Oct 31 13:50:55.010475 containerd[1578]: 2025-10-31 13:50:54.834 [INFO][3890] cni-plugin/utils.go 100: 
File /var/lib/calico/mtu does not exist Oct 31 13:50:55.010475 containerd[1578]: 2025-10-31 13:50:54.876 [INFO][3890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69d9d786b6--2rbvr-eth0 whisker-69d9d786b6- calico-system 87d6c415-1e44-453b-af0a-7b2c40a8254b 905 0 2025-10-31 13:50:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69d9d786b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69d9d786b6-2rbvr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliac764a11f2b [] [] }} ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-" Oct 31 13:50:55.010475 containerd[1578]: 2025-10-31 13:50:54.876 [INFO][3890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.010475 containerd[1578]: 2025-10-31 13:50:54.937 [INFO][3905] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" HandleID="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Workload="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.937 [INFO][3905] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" HandleID="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Workload="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69d9d786b6-2rbvr", "timestamp":"2025-10-31 13:50:54.937519448 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.937 [INFO][3905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.937 [INFO][3905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.937 [INFO][3905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.949 [INFO][3905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" host="localhost" Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.955 [INFO][3905] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.959 [INFO][3905] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.961 [INFO][3905] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.963 [INFO][3905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:50:55.010702 containerd[1578]: 2025-10-31 13:50:54.963 [INFO][3905] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" host="localhost" Oct 31 13:50:55.010898 containerd[1578]: 2025-10-31 13:50:54.964 [INFO][3905] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2 Oct 31 13:50:55.010898 containerd[1578]: 2025-10-31 13:50:54.974 [INFO][3905] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" host="localhost" Oct 31 13:50:55.010898 containerd[1578]: 2025-10-31 13:50:54.989 [INFO][3905] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" host="localhost" Oct 31 13:50:55.010898 containerd[1578]: 2025-10-31 13:50:54.989 [INFO][3905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" host="localhost" Oct 31 13:50:55.010898 containerd[1578]: 2025-10-31 13:50:54.989 [INFO][3905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:50:55.010898 containerd[1578]: 2025-10-31 13:50:54.989 [INFO][3905] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" HandleID="k8s-pod-network.69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Workload="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.011003 containerd[1578]: 2025-10-31 13:50:54.992 [INFO][3890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69d9d786b6--2rbvr-eth0", GenerateName:"whisker-69d9d786b6-", Namespace:"calico-system", SelfLink:"", UID:"87d6c415-1e44-453b-af0a-7b2c40a8254b", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69d9d786b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69d9d786b6-2rbvr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliac764a11f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:50:55.011003 containerd[1578]: 2025-10-31 13:50:54.992 [INFO][3890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.011086 containerd[1578]: 2025-10-31 13:50:54.992 [INFO][3890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac764a11f2b ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.011086 containerd[1578]: 2025-10-31 13:50:54.998 [INFO][3890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.011125 containerd[1578]: 2025-10-31 13:50:54.999 [INFO][3890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69d9d786b6--2rbvr-eth0", GenerateName:"whisker-69d9d786b6-", Namespace:"calico-system", SelfLink:"", UID:"87d6c415-1e44-453b-af0a-7b2c40a8254b", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69d9d786b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2", Pod:"whisker-69d9d786b6-2rbvr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliac764a11f2b", MAC:"ca:7b:28:08:f3:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:50:55.011181 containerd[1578]: 2025-10-31 13:50:55.008 [INFO][3890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" Namespace="calico-system" Pod="whisker-69d9d786b6-2rbvr" WorkloadEndpoint="localhost-k8s-whisker--69d9d786b6--2rbvr-eth0" Oct 31 13:50:55.154984 containerd[1578]: time="2025-10-31T13:50:55.154896007Z" level=info msg="connecting to shim 69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2" address="unix:///run/containerd/s/1e81df6721957fb566d361a01b03f74205bef3e31272cea42e9fb3a9ee0f4086" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:50:55.194455 systemd[1]: Started cri-containerd-69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2.scope - libcontainer container 69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2. 
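The IPAM trace above records the address assignment for this endpoint: the host holds an affinity for block 192.168.88.128/26, and 192.168.88.129 is claimed from it for whisker-69d9d786b6-2rbvr. A small sketch with Python's standard ipaddress module, checking the same arithmetic with the values taken directly from the log:

```python
# Sketch: the block arithmetic behind the IPAM trace above, standard library only.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")    # block with affinity for host "localhost"
assigned = ipaddress.ip_address("192.168.88.129")    # address claimed for whisker-69d9d786b6-2rbvr

print(assigned in block)      # True  -- the claimed IP lies inside the affine block
print(block.num_addresses)    # 64    -- a /26 holds 64 addresses
print(next(block.hosts()))    # 192.168.88.129 -- matches the address handed out here
```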
Oct 31 13:50:55.220194 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:50:55.293643 containerd[1578]: time="2025-10-31T13:50:55.293600013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69d9d786b6-2rbvr,Uid:87d6c415-1e44-453b-af0a-7b2c40a8254b,Namespace:calico-system,Attempt:0,} returns sandbox id \"69154c82225470b169b3d173e1c6c321b77346bf740e29362a50ca9a1b3a14e2\"" Oct 31 13:50:55.300100 containerd[1578]: time="2025-10-31T13:50:55.300053798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 13:50:55.315391 kubelet[2712]: I1031 13:50:55.315359 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c599a8a3-4205-4561-8130-ab9955590d60" path="/var/lib/kubelet/pods/c599a8a3-4205-4561-8130-ab9955590d60/volumes" Oct 31 13:50:55.434000 kubelet[2712]: E1031 13:50:55.433437 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:50:55.436823 systemd-networkd[1491]: vxlan.calico: Link UP Oct 31 13:50:55.436834 systemd-networkd[1491]: vxlan.calico: Gained carrier Oct 31 13:50:55.509727 containerd[1578]: time="2025-10-31T13:50:55.509671781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:50:55.514509 containerd[1578]: time="2025-10-31T13:50:55.514441704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 13:50:55.514580 containerd[1578]: time="2025-10-31T13:50:55.514541366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 13:50:55.514779 kubelet[2712]: E1031 13:50:55.514716 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:50:55.521490 kubelet[2712]: E1031 13:50:55.520236 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:50:55.521490 kubelet[2712]: E1031 13:50:55.520393 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-69d9d786b6-2rbvr_calico-system(87d6c415-1e44-453b-af0a-7b2c40a8254b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 13:50:55.522475 containerd[1578]: time="2025-10-31T13:50:55.522446721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 13:50:55.530524 containerd[1578]: time="2025-10-31T13:50:55.530476543Z" level=info msg="TaskExit event 
in podsandbox handler container_id:\"72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2\" id:\"ce6aa59cd2c53284772f230a18fecce8d47c5a8ee35f7385bf18e2186325d2bb\" pid:4143 exit_status:1 exited_at:{seconds:1761918655 nanos:530160272}" Oct 31 13:50:55.738761 containerd[1578]: time="2025-10-31T13:50:55.738388059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:50:55.744459 containerd[1578]: time="2025-10-31T13:50:55.744419308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 13:50:55.744459 containerd[1578]: time="2025-10-31T13:50:55.744487084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 13:50:55.744681 kubelet[2712]: E1031 13:50:55.744634 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:50:55.744743 kubelet[2712]: E1031 13:50:55.744692 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:50:55.745062 kubelet[2712]: E1031 13:50:55.744763 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69d9d786b6-2rbvr_calico-system(87d6c415-1e44-453b-af0a-7b2c40a8254b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 13:50:55.745062 kubelet[2712]: E1031 13:50:55.744810 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d9d786b6-2rbvr" podUID="87d6c415-1e44-453b-af0a-7b2c40a8254b" Oct 31 13:50:56.404474 systemd-networkd[1491]: caliac764a11f2b: Gained IPv6LL Oct 31 13:50:56.444591 kubelet[2712]: E1031 13:50:56.444550 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d9d786b6-2rbvr" podUID="87d6c415-1e44-453b-af0a-7b2c40a8254b" Oct 31 13:50:57.172456 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL Oct 31 13:50:57.576330 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:58536.service - OpenSSH per-connection server daemon (10.0.0.1:58536). Oct 31 13:50:57.629361 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 58536 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:50:57.630793 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:50:57.635132 systemd-logind[1551]: New session 8 of user core. Oct 31 13:50:57.643497 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 13:50:57.760815 sshd[4205]: Connection closed by 10.0.0.1 port 58536 Oct 31 13:50:57.761301 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Oct 31 13:50:57.765034 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:58536.service: Deactivated successfully. Oct 31 13:50:57.766728 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 13:50:57.767347 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Oct 31 13:50:57.768127 systemd-logind[1551]: Removed session 8. 
Oct 31 13:51:00.319083 containerd[1578]: time="2025-10-31T13:51:00.318820500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-v9wdq,Uid:341ea4b7-59e9-45df-9dd6-88324f67c306,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:51:00.417026 systemd-networkd[1491]: cali6d31a05355b: Link UP Oct 31 13:51:00.417353 systemd-networkd[1491]: cali6d31a05355b: Gained carrier Oct 31 13:51:00.434208 containerd[1578]: 2025-10-31 13:51:00.352 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0 calico-apiserver-585cc8fbcc- calico-apiserver 341ea4b7-59e9-45df-9dd6-88324f67c306 840 0 2025-10-31 13:50:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:585cc8fbcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-585cc8fbcc-v9wdq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d31a05355b [] [] }} ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-" Oct 31 13:51:00.434208 containerd[1578]: 2025-10-31 13:51:00.352 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.434208 containerd[1578]: 2025-10-31 13:51:00.376 [INFO][4235] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" HandleID="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Workload="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.376 [INFO][4235] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" HandleID="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Workload="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-585cc8fbcc-v9wdq", "timestamp":"2025-10-31 13:51:00.376030399 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.376 [INFO][4235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.376 [INFO][4235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.376 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.388 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" host="localhost" Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.392 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.397 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.398 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.400 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:00.434409 containerd[1578]: 2025-10-31 13:51:00.400 [INFO][4235] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" host="localhost" Oct 31 13:51:00.434696 containerd[1578]: 2025-10-31 13:51:00.402 [INFO][4235] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276 Oct 31 13:51:00.434696 containerd[1578]: 2025-10-31 13:51:00.406 [INFO][4235] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" host="localhost" Oct 31 13:51:00.434696 containerd[1578]: 2025-10-31 13:51:00.411 [INFO][4235] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" host="localhost" Oct 31 13:51:00.434696 containerd[1578]: 2025-10-31 13:51:00.411 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" host="localhost" Oct 31 13:51:00.434696 containerd[1578]: 2025-10-31 13:51:00.411 [INFO][4235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:00.434696 containerd[1578]: 2025-10-31 13:51:00.411 [INFO][4235] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" HandleID="k8s-pod-network.71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Workload="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.434843 containerd[1578]: 2025-10-31 13:51:00.413 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0", GenerateName:"calico-apiserver-585cc8fbcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"341ea4b7-59e9-45df-9dd6-88324f67c306", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"585cc8fbcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-585cc8fbcc-v9wdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d31a05355b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:00.434897 containerd[1578]: 2025-10-31 13:51:00.413 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.434897 containerd[1578]: 2025-10-31 13:51:00.413 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d31a05355b ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.434897 containerd[1578]: 2025-10-31 13:51:00.415 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.434978 containerd[1578]: 2025-10-31 13:51:00.416 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0", GenerateName:"calico-apiserver-585cc8fbcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"341ea4b7-59e9-45df-9dd6-88324f67c306", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"585cc8fbcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276", Pod:"calico-apiserver-585cc8fbcc-v9wdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d31a05355b", MAC:"ce:7c:73:27:22:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:00.435024 containerd[1578]: 2025-10-31 13:51:00.430 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-v9wdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--v9wdq-eth0" Oct 31 13:51:00.459767 containerd[1578]: time="2025-10-31T13:51:00.459719087Z" level=info msg="connecting to shim 71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276" address="unix:///run/containerd/s/78f4c81bd1fdbbdfe96ea7318e76ab91e208f31fd9c308b3ed5784098c1126f9" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:00.487446 systemd[1]: Started cri-containerd-71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276.scope - libcontainer container 71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276. 
Oct 31 13:51:00.497881 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:00.518756 containerd[1578]: time="2025-10-31T13:51:00.518706297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-v9wdq,Uid:341ea4b7-59e9-45df-9dd6-88324f67c306,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"71caba867aa5e008c75a96ddb9ee382eee6a19ebd183c7d0e54cf417f0fa0276\"" Oct 31 13:51:00.520997 containerd[1578]: time="2025-10-31T13:51:00.520786308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:51:00.724270 containerd[1578]: time="2025-10-31T13:51:00.724221846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:00.725139 containerd[1578]: time="2025-10-31T13:51:00.725099099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:51:00.725214 containerd[1578]: time="2025-10-31T13:51:00.725175314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:51:00.725427 kubelet[2712]: E1031 13:51:00.725371 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:00.725427 kubelet[2712]: E1031 13:51:00.725419 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:00.725741 kubelet[2712]: E1031 13:51:00.725498 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-585cc8fbcc-v9wdq_calico-apiserver(341ea4b7-59e9-45df-9dd6-88324f67c306): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:00.725741 kubelet[2712]: E1031 13:51:00.725539 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" podUID="341ea4b7-59e9-45df-9dd6-88324f67c306" Oct 31 13:51:01.315858 containerd[1578]: time="2025-10-31T13:51:01.315816651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d64b9b44-7kmqj,Uid:fc3da513-331e-433f-b59a-3df653173d16,Namespace:calico-system,Attempt:0,}" Oct 31 13:51:01.316823 
containerd[1578]: time="2025-10-31T13:51:01.316785838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-rdm7t,Uid:dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca,Namespace:calico-apiserver,Attempt:0,}" Oct 31 13:51:01.419043 systemd-networkd[1491]: caliaeb4670b158: Link UP Oct 31 13:51:01.419652 systemd-networkd[1491]: caliaeb4670b158: Gained carrier Oct 31 13:51:01.431635 containerd[1578]: 2025-10-31 13:51:01.356 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0 calico-kube-controllers-54d64b9b44- calico-system fc3da513-331e-433f-b59a-3df653173d16 839 0 2025-10-31 13:50:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54d64b9b44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54d64b9b44-7kmqj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaeb4670b158 [] [] }} ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-" Oct 31 13:51:01.431635 containerd[1578]: 2025-10-31 13:51:01.356 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.431635 containerd[1578]: 2025-10-31 13:51:01.382 [INFO][4335] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" HandleID="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Workload="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.382 [INFO][4335] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" HandleID="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Workload="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54d64b9b44-7kmqj", "timestamp":"2025-10-31 13:51:01.382454809 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.382 [INFO][4335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.382 [INFO][4335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.382 [INFO][4335] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.393 [INFO][4335] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" host="localhost" Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.397 [INFO][4335] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.400 [INFO][4335] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.402 [INFO][4335] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.404 [INFO][4335] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:01.431984 containerd[1578]: 2025-10-31 13:51:01.404 [INFO][4335] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" host="localhost" Oct 31 13:51:01.432174 containerd[1578]: 2025-10-31 13:51:01.405 [INFO][4335] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b Oct 31 13:51:01.432174 containerd[1578]: 2025-10-31 13:51:01.408 [INFO][4335] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" host="localhost" Oct 31 13:51:01.432174 containerd[1578]: 2025-10-31 13:51:01.414 [INFO][4335] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" host="localhost" Oct 31 13:51:01.432174 containerd[1578]: 2025-10-31 13:51:01.414 [INFO][4335] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" host="localhost" Oct 31 13:51:01.432174 containerd[1578]: 2025-10-31 13:51:01.414 [INFO][4335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:01.432174 containerd[1578]: 2025-10-31 13:51:01.414 [INFO][4335] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" HandleID="k8s-pod-network.64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Workload="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.432412 containerd[1578]: 2025-10-31 13:51:01.416 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0", GenerateName:"calico-kube-controllers-54d64b9b44-", Namespace:"calico-system", SelfLink:"", UID:"fc3da513-331e-433f-b59a-3df653173d16", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d64b9b44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54d64b9b44-7kmqj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaeb4670b158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:01.432468 containerd[1578]: 2025-10-31 13:51:01.416 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.432468 containerd[1578]: 2025-10-31 13:51:01.416 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeb4670b158 ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.432468 containerd[1578]: 2025-10-31 13:51:01.420 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.432527 containerd[1578]: 2025-10-31 13:51:01.420 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0", GenerateName:"calico-kube-controllers-54d64b9b44-", Namespace:"calico-system", SelfLink:"", UID:"fc3da513-331e-433f-b59a-3df653173d16", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d64b9b44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b", Pod:"calico-kube-controllers-54d64b9b44-7kmqj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaeb4670b158", MAC:"72:8b:f4:4c:c0:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:01.432575 containerd[1578]: 2025-10-31 13:51:01.428 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" Namespace="calico-system" Pod="calico-kube-controllers-54d64b9b44-7kmqj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54d64b9b44--7kmqj-eth0" Oct 31 13:51:01.448288 kubelet[2712]: E1031 13:51:01.448240 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" podUID="341ea4b7-59e9-45df-9dd6-88324f67c306" Oct 31 13:51:01.463687 containerd[1578]: time="2025-10-31T13:51:01.463313787Z" level=info msg="connecting to shim 64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b" address="unix:///run/containerd/s/800f59210d2ddc38c81dfe78b45c1f2c1fb50d6a7681284cc3d75a93e4f04581" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:01.488484 systemd[1]: Started cri-containerd-64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b.scope - libcontainer container 64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b. 
Oct 31 13:51:01.500517 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:01.523524 systemd-networkd[1491]: calid32eb244313: Link UP Oct 31 13:51:01.523737 systemd-networkd[1491]: calid32eb244313: Gained carrier Oct 31 13:51:01.538100 containerd[1578]: 2025-10-31 13:51:01.361 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0 calico-apiserver-585cc8fbcc- calico-apiserver dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca 837 0 2025-10-31 13:50:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:585cc8fbcc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-585cc8fbcc-rdm7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid32eb244313 [] [] }} ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-" Oct 31 13:51:01.538100 containerd[1578]: 2025-10-31 13:51:01.361 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.538100 containerd[1578]: 2025-10-31 13:51:01.386 [INFO][4341] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" HandleID="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Workload="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.386 [INFO][4341] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" HandleID="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Workload="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-585cc8fbcc-rdm7t", "timestamp":"2025-10-31 13:51:01.386426334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.386 [INFO][4341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.414 [INFO][4341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.414 [INFO][4341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.494 [INFO][4341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" host="localhost" Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.498 [INFO][4341] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.503 [INFO][4341] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.505 [INFO][4341] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.507 [INFO][4341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:01.538289 containerd[1578]: 2025-10-31 13:51:01.507 [INFO][4341] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" host="localhost" Oct 31 13:51:01.538516 containerd[1578]: 2025-10-31 13:51:01.508 [INFO][4341] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc Oct 31 13:51:01.538516 containerd[1578]: 2025-10-31 13:51:01.512 [INFO][4341] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" host="localhost" Oct 31 13:51:01.538516 containerd[1578]: 2025-10-31 13:51:01.518 [INFO][4341] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" host="localhost" Oct 31 13:51:01.538516 containerd[1578]: 2025-10-31 13:51:01.518 [INFO][4341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" host="localhost" Oct 31 13:51:01.538516 containerd[1578]: 2025-10-31 13:51:01.518 [INFO][4341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:01.538516 containerd[1578]: 2025-10-31 13:51:01.518 [INFO][4341] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" HandleID="k8s-pod-network.edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Workload="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.538618 containerd[1578]: 2025-10-31 13:51:01.521 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0", GenerateName:"calico-apiserver-585cc8fbcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"585cc8fbcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-585cc8fbcc-rdm7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid32eb244313", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:01.538667 containerd[1578]: 2025-10-31 13:51:01.521 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.538667 containerd[1578]: 2025-10-31 13:51:01.521 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid32eb244313 ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.538667 containerd[1578]: 2025-10-31 13:51:01.523 [INFO][4313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.538725 containerd[1578]: 2025-10-31 13:51:01.524 [INFO][4313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0", GenerateName:"calico-apiserver-585cc8fbcc-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"585cc8fbcc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc", Pod:"calico-apiserver-585cc8fbcc-rdm7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid32eb244313", MAC:"4e:f2:04:8a:e8:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:01.538768 containerd[1578]: 2025-10-31 13:51:01.535 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" Namespace="calico-apiserver" Pod="calico-apiserver-585cc8fbcc-rdm7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--585cc8fbcc--rdm7t-eth0" Oct 31 13:51:01.543647 containerd[1578]: time="2025-10-31T13:51:01.543593253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d64b9b44-7kmqj,Uid:fc3da513-331e-433f-b59a-3df653173d16,Namespace:calico-system,Attempt:0,} returns sandbox id \"64ca08c57dd21b442410a1a728f11cd7356c62e896ca3be4d90acf886c09099b\"" Oct 31 13:51:01.545321 containerd[1578]: time="2025-10-31T13:51:01.545251973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 13:51:01.561240 containerd[1578]: time="2025-10-31T13:51:01.560776324Z" level=info msg="connecting to shim edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc" address="unix:///run/containerd/s/c80ec8a002d07d8909e1b8604960d07c156594586d4cfdd67f8b0a9381ad011a" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:01.587522 systemd[1]: Started cri-containerd-edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc.scope - libcontainer container edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc. 
Oct 31 13:51:01.599474 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:01.618142 containerd[1578]: time="2025-10-31T13:51:01.618103928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-585cc8fbcc-rdm7t,Uid:dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"edcb1aa645771e896d3672d3a3176f6224b4e040117dee196fc3cee869d27ecc\"" Oct 31 13:51:01.748299 containerd[1578]: time="2025-10-31T13:51:01.748196911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:01.749411 containerd[1578]: time="2025-10-31T13:51:01.749373618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 13:51:01.749572 containerd[1578]: time="2025-10-31T13:51:01.749429029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 13:51:01.749801 kubelet[2712]: E1031 13:51:01.749760 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:51:01.750610 kubelet[2712]: E1031 13:51:01.750430 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:51:01.750718 kubelet[2712]: E1031 13:51:01.750612 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54d64b9b44-7kmqj_calico-system(fc3da513-331e-433f-b59a-3df653173d16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:01.750718 kubelet[2712]: E1031 13:51:01.750655 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" podUID="fc3da513-331e-433f-b59a-3df653173d16" Oct 31 13:51:01.750814 containerd[1578]: time="2025-10-31T13:51:01.750743882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:51:01.908429 systemd-networkd[1491]: cali6d31a05355b: Gained IPv6LL Oct 31 13:51:01.957026 containerd[1578]: 
time="2025-10-31T13:51:01.956869313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:01.957910 containerd[1578]: time="2025-10-31T13:51:01.957809974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:51:01.957910 containerd[1578]: time="2025-10-31T13:51:01.957888790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:51:01.958098 kubelet[2712]: E1031 13:51:01.958042 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:01.958098 kubelet[2712]: E1031 13:51:01.958087 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:01.958176 kubelet[2712]: E1031 13:51:01.958151 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-585cc8fbcc-rdm7t_calico-apiserver(dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:01.958200 kubelet[2712]: E1031 13:51:01.958181 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" podUID="dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca" Oct 31 13:51:02.315287 containerd[1578]: time="2025-10-31T13:51:02.315173237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-ddp7z,Uid:2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353,Namespace:calico-system,Attempt:0,}" Oct 31 13:51:02.316123 kubelet[2712]: E1031 13:51:02.316101 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:02.316459 containerd[1578]: time="2025-10-31T13:51:02.316405469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k556c,Uid:6f0df04e-9828-4f74-a5ef-4e403b9cca2d,Namespace:kube-system,Attempt:0,}" Oct 31 13:51:02.317166 kubelet[2712]: E1031 13:51:02.317140 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 
31 13:51:02.317966 containerd[1578]: time="2025-10-31T13:51:02.317940558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwx75,Uid:8be1d4da-a7a8-4577-85a3-7fd88fd553c4,Namespace:kube-system,Attempt:0,}" Oct 31 13:51:02.436758 systemd-networkd[1491]: calie0c10e953a2: Link UP Oct 31 13:51:02.436926 systemd-networkd[1491]: calie0c10e953a2: Gained carrier Oct 31 13:51:02.449837 containerd[1578]: 2025-10-31 13:51:02.361 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--ddp7z-eth0 goldmane-7c778bb748- calico-system 2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353 842 0 2025-10-31 13:50:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-ddp7z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie0c10e953a2 [] [] }} ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-" Oct 31 13:51:02.449837 containerd[1578]: 2025-10-31 13:51:02.361 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.449837 containerd[1578]: 2025-10-31 13:51:02.394 [INFO][4506] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" HandleID="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Workload="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.395 [INFO][4506] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" HandleID="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Workload="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-ddp7z", "timestamp":"2025-10-31 13:51:02.394961847 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.395 [INFO][4506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.395 [INFO][4506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.395 [INFO][4506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.406 [INFO][4506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" host="localhost" Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.410 [INFO][4506] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.413 [INFO][4506] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.416 [INFO][4506] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.418 [INFO][4506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:02.450395 containerd[1578]: 2025-10-31 13:51:02.418 [INFO][4506] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" host="localhost" Oct 31 13:51:02.450820 containerd[1578]: 2025-10-31 13:51:02.419 [INFO][4506] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9 Oct 31 13:51:02.450820 containerd[1578]: 2025-10-31 13:51:02.423 [INFO][4506] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" host="localhost" Oct 31 13:51:02.450820 containerd[1578]: 2025-10-31 13:51:02.428 [INFO][4506] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" host="localhost" Oct 31 13:51:02.450820 containerd[1578]: 2025-10-31 13:51:02.428 [INFO][4506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" host="localhost" Oct 31 13:51:02.450820 containerd[1578]: 2025-10-31 13:51:02.428 [INFO][4506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:02.450820 containerd[1578]: 2025-10-31 13:51:02.428 [INFO][4506] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" HandleID="k8s-pod-network.07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Workload="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.450980 containerd[1578]: 2025-10-31 13:51:02.434 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--ddp7z-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-ddp7z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0c10e953a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:02.450980 containerd[1578]: 2025-10-31 13:51:02.434 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.451054 containerd[1578]: 2025-10-31 13:51:02.434 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0c10e953a2 ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.451054 containerd[1578]: 2025-10-31 13:51:02.437 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.451156 containerd[1578]: 2025-10-31 13:51:02.437 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--ddp7z-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9", Pod:"goldmane-7c778bb748-ddp7z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie0c10e953a2", MAC:"4e:73:84:28:89:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:02.451313 containerd[1578]: 2025-10-31 13:51:02.447 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" Namespace="calico-system" Pod="goldmane-7c778bb748-ddp7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--ddp7z-eth0" Oct 31 13:51:02.456413 kubelet[2712]: E1031 13:51:02.456360 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" podUID="dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca" Oct 31 13:51:02.458306 kubelet[2712]: E1031 13:51:02.458254 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" podUID="341ea4b7-59e9-45df-9dd6-88324f67c306" Oct 31 13:51:02.459224 kubelet[2712]: E1031 13:51:02.459182 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" podUID="fc3da513-331e-433f-b59a-3df653173d16" Oct 31 13:51:02.490483 containerd[1578]: time="2025-10-31T13:51:02.490437607Z" level=info msg="connecting to shim 07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9" address="unix:///run/containerd/s/60fbc74cbcc380062351ec7ffefee7bd6d76c194f4d931eeeafdf50aea1d4dd7" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:02.518453 systemd[1]: Started cri-containerd-07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9.scope - libcontainer container 07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9. Oct 31 13:51:02.534046 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:02.546599 systemd-networkd[1491]: cali6197eb791f0: Link UP Oct 31 13:51:02.546792 systemd-networkd[1491]: cali6197eb791f0: Gained carrier Oct 31 13:51:02.564117 containerd[1578]: 2025-10-31 13:51:02.365 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--kwx75-eth0 coredns-66bc5c9577- kube-system 8be1d4da-a7a8-4577-85a3-7fd88fd553c4 838 0 2025-10-31 13:50:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-kwx75 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6197eb791f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-" Oct 31 13:51:02.564117 containerd[1578]: 2025-10-31 13:51:02.365 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.564117 containerd[1578]: 2025-10-31 13:51:02.396 [INFO][4513] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" HandleID="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Workload="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.396 [INFO][4513] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" HandleID="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Workload="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-kwx75", "timestamp":"2025-10-31 13:51:02.396445166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 
13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.396 [INFO][4513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.428 [INFO][4513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.428 [INFO][4513] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.508 [INFO][4513] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" host="localhost" Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.514 [INFO][4513] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.519 [INFO][4513] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.522 [INFO][4513] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.524 [INFO][4513] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:02.564928 containerd[1578]: 2025-10-31 13:51:02.524 [INFO][4513] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" host="localhost" Oct 31 13:51:02.565131 containerd[1578]: 2025-10-31 13:51:02.525 [INFO][4513] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da Oct 31 13:51:02.565131 containerd[1578]: 2025-10-31 13:51:02.529 [INFO][4513] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" host="localhost" Oct 31 13:51:02.565131 containerd[1578]: 2025-10-31 13:51:02.535 [INFO][4513] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" host="localhost" Oct 31 13:51:02.565131 containerd[1578]: 2025-10-31 13:51:02.535 [INFO][4513] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" host="localhost" Oct 31 13:51:02.565131 containerd[1578]: 2025-10-31 13:51:02.535 [INFO][4513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:02.565131 containerd[1578]: 2025-10-31 13:51:02.535 [INFO][4513] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" HandleID="k8s-pod-network.880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Workload="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.565238 containerd[1578]: 2025-10-31 13:51:02.542 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kwx75-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8be1d4da-a7a8-4577-85a3-7fd88fd553c4", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-kwx75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6197eb791f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:02.565238 containerd[1578]: 2025-10-31 13:51:02.543 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.565238 containerd[1578]: 2025-10-31 13:51:02.543 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6197eb791f0 ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.565238 containerd[1578]: 2025-10-31 13:51:02.546 
[INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.565238 containerd[1578]: 2025-10-31 13:51:02.546 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kwx75-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8be1d4da-a7a8-4577-85a3-7fd88fd553c4", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da", Pod:"coredns-66bc5c9577-kwx75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6197eb791f0", MAC:"1e:4b:6a:6c:18:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:02.565238 containerd[1578]: 2025-10-31 13:51:02.558 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" Namespace="kube-system" Pod="coredns-66bc5c9577-kwx75" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kwx75-eth0" Oct 31 13:51:02.588486 containerd[1578]: time="2025-10-31T13:51:02.588390754Z" level=info msg="connecting to shim 880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da" address="unix:///run/containerd/s/774ec7e1c0504129447c9d4c8d37f955079a59dcae809af7e628cc25fc387f7a" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:02.590421 containerd[1578]: time="2025-10-31T13:51:02.590385609Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:goldmane-7c778bb748-ddp7z,Uid:2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353,Namespace:calico-system,Attempt:0,} returns sandbox id \"07eff21c3bea0154dacbf993fd1821ba1ecdca496cd62129d63d4afe95b42cb9\"" Oct 31 13:51:02.592168 containerd[1578]: time="2025-10-31T13:51:02.592062125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 13:51:02.620498 systemd[1]: Started cri-containerd-880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da.scope - libcontainer container 880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da. Oct 31 13:51:02.637403 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:02.641757 systemd-networkd[1491]: cali89e7ed6f767: Link UP Oct 31 13:51:02.642312 systemd-networkd[1491]: cali89e7ed6f767: Gained carrier Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.365 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--k556c-eth0 coredns-66bc5c9577- kube-system 6f0df04e-9828-4f74-a5ef-4e403b9cca2d 836 0 2025-10-31 13:50:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-k556c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89e7ed6f767 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.365 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.396 [INFO][4515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" HandleID="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Workload="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.396 [INFO][4515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" HandleID="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Workload="localhost-k8s-coredns--66bc5c9577--k556c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001363f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-k556c", "timestamp":"2025-10-31 13:51:02.396262132 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.397 [INFO][4515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.535 [INFO][4515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.535 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.607 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.615 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.619 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.621 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.625 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.625 [INFO][4515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.627 [INFO][4515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.631 [INFO][4515] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.637 [INFO][4515] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.637 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" host="localhost" Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.637 [INFO][4515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:02.657586 containerd[1578]: 2025-10-31 13:51:02.637 [INFO][4515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" HandleID="k8s-pod-network.4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Workload="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.658267 containerd[1578]: 2025-10-31 13:51:02.640 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k556c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6f0df04e-9828-4f74-a5ef-4e403b9cca2d", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-k556c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89e7ed6f767", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:02.658267 containerd[1578]: 2025-10-31 13:51:02.640 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.658267 containerd[1578]: 2025-10-31 13:51:02.640 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89e7ed6f767 ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.658267 containerd[1578]: 2025-10-31 13:51:02.642 
[INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.658267 containerd[1578]: 2025-10-31 13:51:02.643 [INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k556c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6f0df04e-9828-4f74-a5ef-4e403b9cca2d", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c", Pod:"coredns-66bc5c9577-k556c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89e7ed6f767", MAC:"2a:98:e5:14:e7:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:02.658267 containerd[1578]: 2025-10-31 13:51:02.653 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" Namespace="kube-system" Pod="coredns-66bc5c9577-k556c" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k556c-eth0" Oct 31 13:51:02.673744 containerd[1578]: time="2025-10-31T13:51:02.673694561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kwx75,Uid:8be1d4da-a7a8-4577-85a3-7fd88fd553c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da\"" Oct 31 13:51:02.674519 kubelet[2712]: E1031 13:51:02.674495 2712 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:02.681004 containerd[1578]: time="2025-10-31T13:51:02.680969290Z" level=info msg="CreateContainer within sandbox \"880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 13:51:02.695345 containerd[1578]: time="2025-10-31T13:51:02.695263579Z" level=info msg="Container e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:51:02.702035 containerd[1578]: time="2025-10-31T13:51:02.701979122Z" level=info msg="CreateContainer within sandbox \"880494a460bb2fcee3551a06e3b2d039799b831eea5dd8730312b2948c34c5da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2\"" Oct 31 13:51:02.703780 containerd[1578]: time="2025-10-31T13:51:02.703748655Z" level=info msg="StartContainer for \"e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2\"" Oct 31 13:51:02.704519 containerd[1578]: time="2025-10-31T13:51:02.704488474Z" level=info msg="connecting to shim 4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c" address="unix:///run/containerd/s/05c2555d1ae3dfa5e490a3d688c7724d9c74dcf0e4c97d5d7cf7e26345ad0264" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:02.705001 containerd[1578]: time="2025-10-31T13:51:02.704978766Z" level=info msg="connecting to shim e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2" address="unix:///run/containerd/s/774ec7e1c0504129447c9d4c8d37f955079a59dcae809af7e628cc25fc387f7a" protocol=ttrpc version=3 Oct 31 13:51:02.727631 systemd[1]: Started cri-containerd-e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2.scope - libcontainer container e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2. Oct 31 13:51:02.730622 systemd[1]: Started cri-containerd-4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c.scope - libcontainer container 4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c. Oct 31 13:51:02.746248 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:02.764217 containerd[1578]: time="2025-10-31T13:51:02.763552465Z" level=info msg="StartContainer for \"e890cf9c49ee2730bc8d13fd4cfda1e3d4bf552f2a5eec0b4cb4eef22b88f9e2\" returns successfully" Oct 31 13:51:02.776548 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:53898.service - OpenSSH per-connection server daemon (10.0.0.1:53898). 
Oct 31 13:51:02.782516 containerd[1578]: time="2025-10-31T13:51:02.782359363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k556c,Uid:6f0df04e-9828-4f74-a5ef-4e403b9cca2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c\"" Oct 31 13:51:02.783472 kubelet[2712]: E1031 13:51:02.783450 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:02.787601 containerd[1578]: time="2025-10-31T13:51:02.787549859Z" level=info msg="CreateContainer within sandbox \"4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 13:51:02.795573 containerd[1578]: time="2025-10-31T13:51:02.795516958Z" level=info msg="Container 1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8: CDI devices from CRI Config.CDIDevices: []" Oct 31 13:51:02.803857 containerd[1578]: time="2025-10-31T13:51:02.803621562Z" level=info msg="CreateContainer within sandbox \"4a16bcf2bc77ef4c82829c36c16c062707dbb6679265e5465cf8bebc032a390c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8\"" Oct 31 13:51:02.804734 containerd[1578]: time="2025-10-31T13:51:02.804695804Z" level=info msg="StartContainer for \"1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8\"" Oct 31 13:51:02.807124 containerd[1578]: time="2025-10-31T13:51:02.807081773Z" level=info msg="connecting to shim 1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8" address="unix:///run/containerd/s/05c2555d1ae3dfa5e490a3d688c7724d9c74dcf0e4c97d5d7cf7e26345ad0264" protocol=ttrpc version=3 Oct 31 13:51:02.807576 containerd[1578]: time="2025-10-31T13:51:02.807454683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:02.811617 containerd[1578]: time="2025-10-31T13:51:02.811579900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 13:51:02.811754 containerd[1578]: time="2025-10-31T13:51:02.811610625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 13:51:02.811952 kubelet[2712]: E1031 13:51:02.811915 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:51:02.812059 kubelet[2712]: E1031 13:51:02.811963 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:51:02.812205 kubelet[2712]: E1031 13:51:02.812118 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in 
pod goldmane-7c778bb748-ddp7z_calico-system(2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:02.812205 kubelet[2712]: E1031 13:51:02.812170 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ddp7z" podUID="2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353" Oct 31 13:51:02.849476 systemd[1]: Started cri-containerd-1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8.scope - libcontainer container 1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8. Oct 31 13:51:02.863839 sshd[4736]: Accepted publickey for core from 10.0.0.1 port 53898 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:02.864997 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:02.868376 systemd-networkd[1491]: calid32eb244313: Gained IPv6LL Oct 31 13:51:02.872765 systemd-logind[1551]: New session 9 of user core. Oct 31 13:51:02.879635 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 13:51:02.895680 containerd[1578]: time="2025-10-31T13:51:02.895639472Z" level=info msg="StartContainer for \"1cdb1b9bccf9c76c159ad3738f52c70c354dbf021d73f61028e4f09bbe2fb8b8\" returns successfully" Oct 31 13:51:02.933464 systemd-networkd[1491]: caliaeb4670b158: Gained IPv6LL Oct 31 13:51:03.064164 sshd[4763]: Connection closed by 10.0.0.1 port 53898 Oct 31 13:51:03.064118 sshd-session[4736]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:03.068008 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Oct 31 13:51:03.068255 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:53898.service: Deactivated successfully. Oct 31 13:51:03.070144 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 13:51:03.071964 systemd-logind[1551]: Removed session 9. 
Oct 31 13:51:03.462230 kubelet[2712]: E1031 13:51:03.461989 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:03.473530 kubelet[2712]: E1031 13:51:03.473492 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:03.476768 kubelet[2712]: E1031 13:51:03.476699 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ddp7z" podUID="2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353" Oct 31 13:51:03.476768 kubelet[2712]: E1031 13:51:03.476702 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" podUID="dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca" Oct 31 13:51:03.478818 kubelet[2712]: E1031 13:51:03.478768 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" podUID="fc3da513-331e-433f-b59a-3df653173d16" Oct 31 13:51:03.482126 kubelet[2712]: I1031 13:51:03.481158 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k556c" podStartSLOduration=38.48112909 podStartE2EDuration="38.48112909s" podCreationTimestamp="2025-10-31 13:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:51:03.479640297 +0000 UTC m=+46.257020612" watchObservedRunningTime="2025-10-31 13:51:03.48112909 +0000 UTC m=+46.258509365" Oct 31 13:51:03.511201 kubelet[2712]: I1031 13:51:03.511128 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kwx75" podStartSLOduration=38.511111323 podStartE2EDuration="38.511111323s" podCreationTimestamp="2025-10-31 13:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 13:51:03.510724572 +0000 UTC m=+46.288104847" watchObservedRunningTime="2025-10-31 13:51:03.511111323 +0000 UTC m=+46.288491598" Oct 31 13:51:04.148481 
systemd-networkd[1491]: calie0c10e953a2: Gained IPv6LL Oct 31 13:51:04.149304 systemd-networkd[1491]: cali89e7ed6f767: Gained IPv6LL Oct 31 13:51:04.315290 containerd[1578]: time="2025-10-31T13:51:04.315236113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msghp,Uid:fda5ab0e-82e2-4b7d-827a-809d2fbca767,Namespace:calico-system,Attempt:0,}" Oct 31 13:51:04.404636 systemd-networkd[1491]: cali6197eb791f0: Gained IPv6LL Oct 31 13:51:04.424622 systemd-networkd[1491]: calica1d37065f5: Link UP Oct 31 13:51:04.426626 systemd-networkd[1491]: calica1d37065f5: Gained carrier Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.359 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--msghp-eth0 csi-node-driver- calico-system fda5ab0e-82e2-4b7d-827a-809d2fbca767 719 0 2025-10-31 13:50:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-msghp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calica1d37065f5 [] [] }} ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.360 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.382 [INFO][4808] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" HandleID="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Workload="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.382 [INFO][4808] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" HandleID="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Workload="localhost-k8s-csi--node--driver--msghp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050eaa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-msghp", "timestamp":"2025-10-31 13:51:04.382134946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.382 [INFO][4808] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.382 [INFO][4808] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.382 [INFO][4808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.395 [INFO][4808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.400 [INFO][4808] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.404 [INFO][4808] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.407 [INFO][4808] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.409 [INFO][4808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.409 [INFO][4808] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.410 [INFO][4808] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.413 [INFO][4808] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.419 [INFO][4808] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.419 [INFO][4808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" host="localhost" Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.419 [INFO][4808] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 13:51:04.454081 containerd[1578]: 2025-10-31 13:51:04.419 [INFO][4808] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" HandleID="k8s-pod-network.9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Workload="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.454625 containerd[1578]: 2025-10-31 13:51:04.422 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--msghp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fda5ab0e-82e2-4b7d-827a-809d2fbca767", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-msghp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica1d37065f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:04.454625 containerd[1578]: 2025-10-31 13:51:04.422 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.454625 containerd[1578]: 2025-10-31 13:51:04.422 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica1d37065f5 ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.454625 containerd[1578]: 2025-10-31 13:51:04.426 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.454625 containerd[1578]: 2025-10-31 13:51:04.428 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--msghp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fda5ab0e-82e2-4b7d-827a-809d2fbca767", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 13, 50, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca", Pod:"csi-node-driver-msghp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica1d37065f5", MAC:"56:5b:2b:60:18:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 13:51:04.454625 containerd[1578]: 2025-10-31 13:51:04.451 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" Namespace="calico-system" Pod="csi-node-driver-msghp" WorkloadEndpoint="localhost-k8s-csi--node--driver--msghp-eth0" Oct 31 13:51:04.477981 kubelet[2712]: E1031 13:51:04.476583 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:04.477981 kubelet[2712]: E1031 13:51:04.476676 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:04.478723 kubelet[2712]: E1031 13:51:04.478651 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ddp7z" podUID="2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353" Oct 31 13:51:04.485506 containerd[1578]: time="2025-10-31T13:51:04.485407642Z" level=info msg="connecting to shim 9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca" address="unix:///run/containerd/s/7d170115f63577d4d81916d39c26e918b2188b47cfa464ca094a40a5f5fbff28" namespace=k8s.io protocol=ttrpc version=3 Oct 31 13:51:04.516462 systemd[1]: Started cri-containerd-9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca.scope - libcontainer container 
9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca. Oct 31 13:51:04.529110 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 13:51:04.548840 containerd[1578]: time="2025-10-31T13:51:04.548798724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msghp,Uid:fda5ab0e-82e2-4b7d-827a-809d2fbca767,Namespace:calico-system,Attempt:0,} returns sandbox id \"9436ae8427f841852d79a8f5dae4dc3d1eb799fb2b7b47e54cb473c6bd7d85ca\"" Oct 31 13:51:04.550548 containerd[1578]: time="2025-10-31T13:51:04.550520354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 13:51:04.780769 containerd[1578]: time="2025-10-31T13:51:04.780641746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:04.782176 containerd[1578]: time="2025-10-31T13:51:04.782119252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 13:51:04.782257 containerd[1578]: time="2025-10-31T13:51:04.782230032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 13:51:04.782672 kubelet[2712]: E1031 13:51:04.782385 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:51:04.782672 kubelet[2712]: E1031 13:51:04.782433 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:51:04.782672 kubelet[2712]: E1031 13:51:04.782514 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-msghp_calico-system(fda5ab0e-82e2-4b7d-827a-809d2fbca767): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:04.783426 containerd[1578]: time="2025-10-31T13:51:04.783397362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 13:51:04.996664 containerd[1578]: time="2025-10-31T13:51:04.996561744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:04.997470 containerd[1578]: time="2025-10-31T13:51:04.997432021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 13:51:04.997593 containerd[1578]: time="2025-10-31T13:51:04.997504233Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 13:51:04.997942 kubelet[2712]: E1031 13:51:04.997695 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:51:04.997942 kubelet[2712]: E1031 13:51:04.997761 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:51:04.997942 kubelet[2712]: E1031 13:51:04.997842 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-msghp_calico-system(fda5ab0e-82e2-4b7d-827a-809d2fbca767): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:04.998116 kubelet[2712]: E1031 13:51:04.997880 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:51:05.480306 kubelet[2712]: E1031 13:51:05.480207 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:05.480871 kubelet[2712]: E1031 13:51:05.480579 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:05.482691 kubelet[2712]: E1031 13:51:05.482652 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:51:06.197430 systemd-networkd[1491]: calica1d37065f5: Gained IPv6LL Oct 31 13:51:06.482635 kubelet[2712]: E1031 13:51:06.482521 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:51:08.078183 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900). Oct 31 13:51:08.143860 sshd[4881]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:08.145231 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:08.150437 systemd-logind[1551]: New session 10 of user core. Oct 31 13:51:08.159445 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 13:51:08.260980 sshd[4884]: Connection closed by 10.0.0.1 port 53900 Oct 31 13:51:08.260900 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:08.267741 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:53900.service: Deactivated successfully. Oct 31 13:51:08.269696 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 13:51:08.270689 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Oct 31 13:51:08.273803 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:53908.service - OpenSSH per-connection server daemon (10.0.0.1:53908). Oct 31 13:51:08.274574 systemd-logind[1551]: Removed session 10. Oct 31 13:51:08.314361 containerd[1578]: time="2025-10-31T13:51:08.314257038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 13:51:08.342150 sshd[4899]: Accepted publickey for core from 10.0.0.1 port 53908 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:08.343460 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:08.349723 systemd-logind[1551]: New session 11 of user core. Oct 31 13:51:08.360480 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 31 13:51:08.522143 containerd[1578]: time="2025-10-31T13:51:08.522088431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:08.524772 containerd[1578]: time="2025-10-31T13:51:08.524730311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 13:51:08.524944 containerd[1578]: time="2025-10-31T13:51:08.524808684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 13:51:08.525071 kubelet[2712]: E1031 13:51:08.524931 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:51:08.525380 kubelet[2712]: E1031 13:51:08.525078 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 13:51:08.525380 kubelet[2712]: E1031 13:51:08.525153 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-69d9d786b6-2rbvr_calico-system(87d6c415-1e44-453b-af0a-7b2c40a8254b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:08.526207 containerd[1578]: time="2025-10-31T13:51:08.526166509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 13:51:08.553616 sshd[4906]: Connection closed by 10.0.0.1 port 53908 Oct 31 13:51:08.554368 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:08.568563 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:53908.service: Deactivated successfully. Oct 31 13:51:08.571955 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 13:51:08.575062 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. Oct 31 13:51:08.580372 systemd-logind[1551]: Removed session 11. Oct 31 13:51:08.581128 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:53918.service - OpenSSH per-connection server daemon (10.0.0.1:53918). Oct 31 13:51:08.638186 sshd[4920]: Accepted publickey for core from 10.0.0.1 port 53918 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:08.639331 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:08.643884 systemd-logind[1551]: New session 12 of user core. Oct 31 13:51:08.648443 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 31 13:51:08.728541 sshd[4923]: Connection closed by 10.0.0.1 port 53918 Oct 31 13:51:08.728401 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:08.732342 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:53918.service: Deactivated successfully. Oct 31 13:51:08.733115 containerd[1578]: time="2025-10-31T13:51:08.733077189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:08.734457 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 13:51:08.735250 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Oct 31 13:51:08.736238 systemd-logind[1551]: Removed session 12. Oct 31 13:51:08.744283 containerd[1578]: time="2025-10-31T13:51:08.744226203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 13:51:08.744375 containerd[1578]: time="2025-10-31T13:51:08.744319018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 13:51:08.744513 kubelet[2712]: E1031 13:51:08.744480 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:51:08.744859 kubelet[2712]: E1031 13:51:08.744654 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 13:51:08.744859 kubelet[2712]: E1031 13:51:08.744733 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-69d9d786b6-2rbvr_calico-system(87d6c415-1e44-453b-af0a-7b2c40a8254b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:08.744859 kubelet[2712]: E1031 13:51:08.744771 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d9d786b6-2rbvr" 
podUID="87d6c415-1e44-453b-af0a-7b2c40a8254b" Oct 31 13:51:13.747189 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:42548.service - OpenSSH per-connection server daemon (10.0.0.1:42548). Oct 31 13:51:13.805665 sshd[4938]: Accepted publickey for core from 10.0.0.1 port 42548 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:13.806823 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:13.811000 systemd-logind[1551]: New session 13 of user core. Oct 31 13:51:13.830487 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 13:51:13.923948 sshd[4941]: Connection closed by 10.0.0.1 port 42548 Oct 31 13:51:13.924112 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:13.933512 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:42548.service: Deactivated successfully. Oct 31 13:51:13.935265 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 13:51:13.937141 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit. Oct 31 13:51:13.940098 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560). Oct 31 13:51:13.941720 systemd-logind[1551]: Removed session 13. Oct 31 13:51:13.995060 sshd[4955]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:13.996568 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:14.001575 systemd-logind[1551]: New session 14 of user core. Oct 31 13:51:14.016453 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 13:51:14.172208 sshd[4958]: Connection closed by 10.0.0.1 port 42560 Oct 31 13:51:14.172891 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:14.181447 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:42560.service: Deactivated successfully. Oct 31 13:51:14.183022 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 13:51:14.183745 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Oct 31 13:51:14.186938 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570). Oct 31 13:51:14.187661 systemd-logind[1551]: Removed session 14. Oct 31 13:51:14.249365 sshd[4970]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:14.250550 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:14.254254 systemd-logind[1551]: New session 15 of user core. Oct 31 13:51:14.264436 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 31 13:51:14.314974 containerd[1578]: time="2025-10-31T13:51:14.314909525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 13:51:14.536012 containerd[1578]: time="2025-10-31T13:51:14.535905827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:14.537462 containerd[1578]: time="2025-10-31T13:51:14.537414735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 13:51:14.537547 containerd[1578]: time="2025-10-31T13:51:14.537485386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 13:51:14.537745 kubelet[2712]: E1031 13:51:14.537705 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:51:14.538578 kubelet[2712]: E1031 13:51:14.538369 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 13:51:14.538578 kubelet[2712]: E1031 13:51:14.538511 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54d64b9b44-7kmqj_calico-system(fc3da513-331e-433f-b59a-3df653173d16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:14.538578 kubelet[2712]: E1031 13:51:14.538545 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" podUID="fc3da513-331e-433f-b59a-3df653173d16" Oct 31 13:51:14.774783 sshd[4973]: Connection closed by 10.0.0.1 port 42570 Oct 31 13:51:14.776065 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:14.783460 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:42570.service: Deactivated successfully. Oct 31 13:51:14.785156 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 13:51:14.785906 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Oct 31 13:51:14.790572 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:42576.service - OpenSSH per-connection server daemon (10.0.0.1:42576). 
Oct 31 13:51:14.791140 systemd-logind[1551]: Removed session 15. Oct 31 13:51:14.846851 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 42576 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:14.848362 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:14.852233 systemd-logind[1551]: New session 16 of user core. Oct 31 13:51:14.862491 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 13:51:15.070397 sshd[4997]: Connection closed by 10.0.0.1 port 42576 Oct 31 13:51:15.070915 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:15.081004 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:42576.service: Deactivated successfully. Oct 31 13:51:15.083697 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 13:51:15.084822 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Oct 31 13:51:15.087791 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:42580.service - OpenSSH per-connection server daemon (10.0.0.1:42580). Oct 31 13:51:15.089359 systemd-logind[1551]: Removed session 16. Oct 31 13:51:15.143485 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 42580 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:15.144614 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:15.150031 systemd-logind[1551]: New session 17 of user core. Oct 31 13:51:15.164423 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 13:51:15.263449 sshd[5011]: Connection closed by 10.0.0.1 port 42580 Oct 31 13:51:15.263974 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:15.267892 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:42580.service: Deactivated successfully. Oct 31 13:51:15.270023 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 13:51:15.270847 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Oct 31 13:51:15.271766 systemd-logind[1551]: Removed session 17. 
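Sessions 13 through 17 above all follow the same pattern: publickey authentication as core from 10.0.0.1, a session held for a fraction of a second, then a clean close. A hedged sketch of a client doing the same round-trip with golang.org/x/crypto/ssh; the key path and target address are illustrative assumptions:

```go
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed location of the client's RSA key; adjust for the real environment.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	config := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a sketch, not for production
		Timeout:         5 * time.Second,
	}
	client, err := ssh.Dial("tcp", "10.0.0.132:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run a trivial command; on the server side this produces the
	// session opened / session closed pair seen in the journal.
	if err := session.Run("true"); err != nil {
		log.Fatal(err)
	}
}
```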
Oct 31 13:51:15.313501 containerd[1578]: time="2025-10-31T13:51:15.313468027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:51:15.555611 containerd[1578]: time="2025-10-31T13:51:15.555546525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:15.556478 containerd[1578]: time="2025-10-31T13:51:15.556440539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:51:15.556615 containerd[1578]: time="2025-10-31T13:51:15.556520031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:51:15.556706 kubelet[2712]: E1031 13:51:15.556652 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:15.556907 kubelet[2712]: E1031 13:51:15.556715 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:15.556907 kubelet[2712]: E1031 13:51:15.556795 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-585cc8fbcc-v9wdq_calico-apiserver(341ea4b7-59e9-45df-9dd6-88324f67c306): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:15.556907 kubelet[2712]: E1031 13:51:15.556824 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" podUID="341ea4b7-59e9-45df-9dd6-88324f67c306" Oct 31 13:51:17.315330 containerd[1578]: time="2025-10-31T13:51:17.315072007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 13:51:17.514638 containerd[1578]: time="2025-10-31T13:51:17.514433291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:17.515447 containerd[1578]: time="2025-10-31T13:51:17.515361027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 13:51:17.515447 containerd[1578]: time="2025-10-31T13:51:17.515426956Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 13:51:17.515598 kubelet[2712]: E1031 13:51:17.515538 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:51:17.515598 kubelet[2712]: E1031 13:51:17.515586 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 13:51:17.515995 containerd[1578]: time="2025-10-31T13:51:17.515900585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 13:51:17.516338 kubelet[2712]: E1031 13:51:17.516052 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-msghp_calico-system(fda5ab0e-82e2-4b7d-827a-809d2fbca767): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:17.741420 containerd[1578]: time="2025-10-31T13:51:17.741361317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:17.742795 containerd[1578]: time="2025-10-31T13:51:17.742757641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 13:51:17.743096 containerd[1578]: time="2025-10-31T13:51:17.743051644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 13:51:17.743319 kubelet[2712]: E1031 13:51:17.743259 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:17.743584 kubelet[2712]: E1031 13:51:17.743415 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 13:51:17.743648 kubelet[2712]: E1031 13:51:17.743599 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-585cc8fbcc-rdm7t_calico-apiserver(dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:17.743675 kubelet[2712]: E1031 13:51:17.743651 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-rdm7t" podUID="dcd2d84c-2d0d-4ab4-85c1-df6fb5617eca" Oct 31 13:51:17.743803 containerd[1578]: time="2025-10-31T13:51:17.743753706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 13:51:17.955023 containerd[1578]: time="2025-10-31T13:51:17.954970840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:17.955899 containerd[1578]: time="2025-10-31T13:51:17.955771517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 13:51:17.955964 containerd[1578]: time="2025-10-31T13:51:17.955841207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 13:51:17.956146 kubelet[2712]: E1031 13:51:17.956091 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:51:17.956225 kubelet[2712]: E1031 13:51:17.956157 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 13:51:17.956346 kubelet[2712]: E1031 13:51:17.956227 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-msghp_calico-system(fda5ab0e-82e2-4b7d-827a-809d2fbca767): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:17.956346 kubelet[2712]: E1031 13:51:17.956265 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767" Oct 31 13:51:18.315319 containerd[1578]: time="2025-10-31T13:51:18.315178002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 13:51:18.529481 containerd[1578]: time="2025-10-31T13:51:18.529438314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 13:51:18.530593 containerd[1578]: time="2025-10-31T13:51:18.530541513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 13:51:18.530593 containerd[1578]: time="2025-10-31T13:51:18.530583639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 13:51:18.530791 kubelet[2712]: E1031 13:51:18.530749 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:51:18.531005 kubelet[2712]: E1031 13:51:18.530801 2712 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 13:51:18.531005 kubelet[2712]: E1031 13:51:18.530881 2712 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-ddp7z_calico-system(2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 13:51:18.531005 kubelet[2712]: E1031 13:51:18.530914 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ddp7z" podUID="2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353" Oct 31 13:51:20.276649 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:42512.service - OpenSSH per-connection server daemon (10.0.0.1:42512). 
Oct 31 13:51:20.342140 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 42512 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:20.343654 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:20.347769 systemd-logind[1551]: New session 18 of user core. Oct 31 13:51:20.365439 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 13:51:20.438307 sshd[5040]: Connection closed by 10.0.0.1 port 42512 Oct 31 13:51:20.438640 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:20.442097 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:42512.service: Deactivated successfully. Oct 31 13:51:20.444830 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 13:51:20.445647 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. Oct 31 13:51:20.446716 systemd-logind[1551]: Removed session 18. Oct 31 13:51:22.314839 kubelet[2712]: E1031 13:51:22.314774 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69d9d786b6-2rbvr" podUID="87d6c415-1e44-453b-af0a-7b2c40a8254b" Oct 31 13:51:25.313310 kubelet[2712]: E1031 13:51:25.312904 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54d64b9b44-7kmqj" podUID="fc3da513-331e-433f-b59a-3df653173d16" Oct 31 13:51:25.451065 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:42522.service - OpenSSH per-connection server daemon (10.0.0.1:42522). 
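By 13:51:22 the kubelet has moved from ErrImagePull to ImagePullBackOff for these containers, i.e. it is waiting between retries instead of pulling on every pod sync. A sketch of the exponential back-off shape involved; the 10s initial delay and 5m cap are the commonly cited kubelet defaults and are assumptions here, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Each failed pull roughly doubles the wait before the next attempt,
	// up to a ceiling, which is why the back-off entries recur with growing gaps.
	delay := 10 * time.Second   // assumed initial delay
	maxDelay := 5 * time.Minute // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %s before retrying the pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```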
Oct 31 13:51:25.507482 sshd[5075]: Accepted publickey for core from 10.0.0.1 port 42522 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:25.509683 containerd[1578]: time="2025-10-31T13:51:25.509317511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72828858193986e6048b6d0d16cffb420a48d78abf851400e9c049885b2dded2\" id:\"37be8a5e824af15e97891afefaa7d3d8e6e8173519dca93849e608de0d4c4cb9\" pid:5069 exited_at:{seconds:1761918685 nanos:508305610}" Oct 31 13:51:25.509470 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:25.512061 kubelet[2712]: E1031 13:51:25.512011 2712 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 13:51:25.516530 systemd-logind[1551]: New session 19 of user core. Oct 31 13:51:25.521520 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 13:51:25.600757 sshd[5084]: Connection closed by 10.0.0.1 port 42522 Oct 31 13:51:25.601583 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:25.605953 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:42522.service: Deactivated successfully. Oct 31 13:51:25.608830 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 13:51:25.609598 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Oct 31 13:51:25.610490 systemd-logind[1551]: Removed session 19. Oct 31 13:51:27.314185 kubelet[2712]: E1031 13:51:27.314133 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-585cc8fbcc-v9wdq" podUID="341ea4b7-59e9-45df-9dd6-88324f67c306" Oct 31 13:51:29.313492 kubelet[2712]: E1031 13:51:29.313430 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-ddp7z" podUID="2bd4fb9f-2b11-4c32-9aaf-f7e5c672a353" Oct 31 13:51:30.613862 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:34466.service - OpenSSH per-connection server daemon (10.0.0.1:34466). Oct 31 13:51:30.663203 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 34466 ssh2: RSA SHA256:fl6UvGPMFXOUdW/5quuz5ILUlbV+nzHw3IOnh9DAFiY Oct 31 13:51:30.662174 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 13:51:30.673905 systemd-logind[1551]: New session 20 of user core. Oct 31 13:51:30.679435 systemd[1]: Started session-20.scope - Session 20 of User core. 
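The dns.go warning at 13:51:25 means the node's resolv.conf lists more nameservers than the resolver limit, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A sketch of that truncation; the limit of 3 matches the classic glibc resolver limit, and the fourth address is a hypothetical stand-in for whatever entry was omitted:

```go
package main

import "fmt"

func main() {
	// The first three entries are the ones the kubelet reports as applied;
	// "203.0.113.53" is a hypothetical extra entry standing in for the omitted one(s).
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "203.0.113.53"}
	const maxNameservers = 3 // assumed resolver limit

	applied := nameservers
	if len(applied) > maxNameservers {
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", applied)
}
```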
Oct 31 13:51:30.756449 sshd[5104]: Connection closed by 10.0.0.1 port 34466 Oct 31 13:51:30.757004 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Oct 31 13:51:30.761402 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:34466.service: Deactivated successfully. Oct 31 13:51:30.763491 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 13:51:30.764329 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Oct 31 13:51:30.765349 systemd-logind[1551]: Removed session 20. Oct 31 13:51:31.314471 kubelet[2712]: E1031 13:51:31.314419 2712 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msghp" podUID="fda5ab0e-82e2-4b7d-827a-809d2fbca767"
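At this point every Calico v3.30.4 image pull on the node is in back-off: kube-controllers, both apiserver pods, csi, node-driver-registrar, goldmane, whisker and whisker-backend. A hedged client-go sketch that surfaces the same picture from pod status rather than from the journal; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List pods in all namespaces and report containers stuck waiting on image pulls.
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container %s: %s (%s)\n",
					pod.Namespace, pod.Name, cs.Name, w.Reason, cs.Image)
			}
		}
	}
}
```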