Oct 29 23:30:39.385654 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 29 23:30:39.385679 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Oct 29 22:08:13 -00 2025 Oct 29 23:30:39.385688 kernel: KASLR enabled Oct 29 23:30:39.385694 kernel: efi: EFI v2.7 by EDK II Oct 29 23:30:39.385700 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 29 23:30:39.385705 kernel: random: crng init done Oct 29 23:30:39.385712 kernel: secureboot: Secure boot disabled Oct 29 23:30:39.385718 kernel: ACPI: Early table checksum verification disabled Oct 29 23:30:39.385726 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 29 23:30:39.385732 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 29 23:30:39.385738 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385744 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385750 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385756 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385765 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385771 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385778 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385784 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385791 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:30:39.385797 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 29 23:30:39.385803 kernel: ACPI: Use ACPI SPCR as default console: No Oct 29 23:30:39.385810 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 23:30:39.385817 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 29 23:30:39.385824 kernel: Zone ranges: Oct 29 23:30:39.385830 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 23:30:39.385837 kernel: DMA32 empty Oct 29 23:30:39.385843 kernel: Normal empty Oct 29 23:30:39.385849 kernel: Device empty Oct 29 23:30:39.385856 kernel: Movable zone start for each node Oct 29 23:30:39.385862 kernel: Early memory node ranges Oct 29 23:30:39.385868 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 29 23:30:39.385875 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 29 23:30:39.385881 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 29 23:30:39.385888 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 29 23:30:39.385895 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 29 23:30:39.385902 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 29 23:30:39.385910 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 29 23:30:39.385916 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 29 23:30:39.385922 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 29 23:30:39.385929 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 29 23:30:39.385941 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 
29 23:30:39.385956 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 29 23:30:39.385964 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 29 23:30:39.385971 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 23:30:39.385991 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 29 23:30:39.385999 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 29 23:30:39.386006 kernel: psci: probing for conduit method from ACPI. Oct 29 23:30:39.386012 kernel: psci: PSCIv1.1 detected in firmware. Oct 29 23:30:39.386022 kernel: psci: Using standard PSCI v0.2 function IDs Oct 29 23:30:39.386029 kernel: psci: Trusted OS migration not required Oct 29 23:30:39.386036 kernel: psci: SMC Calling Convention v1.1 Oct 29 23:30:39.386043 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 29 23:30:39.386050 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 29 23:30:39.386056 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 29 23:30:39.386063 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 29 23:30:39.386070 kernel: Detected PIPT I-cache on CPU0 Oct 29 23:30:39.386077 kernel: CPU features: detected: GIC system register CPU interface Oct 29 23:30:39.386084 kernel: CPU features: detected: Spectre-v4 Oct 29 23:30:39.386091 kernel: CPU features: detected: Spectre-BHB Oct 29 23:30:39.386099 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 29 23:30:39.386106 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 29 23:30:39.386113 kernel: CPU features: detected: ARM erratum 1418040 Oct 29 23:30:39.386120 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 29 23:30:39.386127 kernel: alternatives: applying boot alternatives Oct 29 23:30:39.386135 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3f6e690ee2ade37dd6082d1ad3b53d2d12b3a76b4644e8ca271364e3a8c31ac Oct 29 23:30:39.386143 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 29 23:30:39.386150 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 23:30:39.386157 kernel: Fallback order for Node 0: 0 Oct 29 23:30:39.386164 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 29 23:30:39.386172 kernel: Policy zone: DMA Oct 29 23:30:39.386179 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 23:30:39.386186 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 29 23:30:39.386193 kernel: software IO TLB: area num 4. Oct 29 23:30:39.386200 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 29 23:30:39.386207 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 29 23:30:39.386214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 29 23:30:39.386221 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 23:30:39.386229 kernel: rcu: RCU event tracing is enabled. Oct 29 23:30:39.386236 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 29 23:30:39.386243 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 23:30:39.386251 kernel: Tracing variant of Tasks RCU enabled. 
Oct 29 23:30:39.386259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 23:30:39.386266 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 29 23:30:39.386273 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:30:39.386280 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:30:39.386287 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 29 23:30:39.386293 kernel: GICv3: 256 SPIs implemented Oct 29 23:30:39.386301 kernel: GICv3: 0 Extended SPIs implemented Oct 29 23:30:39.386308 kernel: Root IRQ handler: gic_handle_irq Oct 29 23:30:39.386314 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 29 23:30:39.386321 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 29 23:30:39.386329 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 29 23:30:39.386336 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 29 23:30:39.386343 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 29 23:30:39.386362 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 29 23:30:39.386369 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 29 23:30:39.386376 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 29 23:30:39.386384 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 29 23:30:39.386391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:30:39.386398 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 29 23:30:39.386405 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 29 23:30:39.386412 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 29 23:30:39.386420 kernel: arm-pv: using stolen time PV Oct 29 23:30:39.386428 kernel: Console: colour dummy device 80x25 Oct 29 23:30:39.386436 kernel: ACPI: Core revision 20240827 Oct 29 23:30:39.386443 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 29 23:30:39.386450 kernel: pid_max: default: 32768 minimum: 301 Oct 29 23:30:39.386458 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 23:30:39.386465 kernel: landlock: Up and running. Oct 29 23:30:39.386472 kernel: SELinux: Initializing. Oct 29 23:30:39.386481 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 23:30:39.386488 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 23:30:39.386496 kernel: rcu: Hierarchical SRCU implementation. Oct 29 23:30:39.386503 kernel: rcu: Max phase no-delay instances is 400. Oct 29 23:30:39.386511 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 29 23:30:39.386518 kernel: Remapping and enabling EFI services. Oct 29 23:30:39.386525 kernel: smp: Bringing up secondary CPUs ... 
Oct 29 23:30:39.386534 kernel: Detected PIPT I-cache on CPU1 Oct 29 23:30:39.386546 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 29 23:30:39.386555 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 29 23:30:39.386563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:30:39.386571 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 29 23:30:39.386579 kernel: Detected PIPT I-cache on CPU2 Oct 29 23:30:39.386586 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 29 23:30:39.386595 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 29 23:30:39.386603 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:30:39.386611 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 29 23:30:39.386618 kernel: Detected PIPT I-cache on CPU3 Oct 29 23:30:39.386626 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 29 23:30:39.386633 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 29 23:30:39.386641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:30:39.386650 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 29 23:30:39.386657 kernel: smp: Brought up 1 node, 4 CPUs Oct 29 23:30:39.386665 kernel: SMP: Total of 4 processors activated. Oct 29 23:30:39.386672 kernel: CPU: All CPU(s) started at EL1 Oct 29 23:30:39.386680 kernel: CPU features: detected: 32-bit EL0 Support Oct 29 23:30:39.386687 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 29 23:30:39.386695 kernel: CPU features: detected: Common not Private translations Oct 29 23:30:39.386703 kernel: CPU features: detected: CRC32 instructions Oct 29 23:30:39.386711 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 29 23:30:39.386718 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 29 23:30:39.386726 kernel: CPU features: detected: LSE atomic instructions Oct 29 23:30:39.386733 kernel: CPU features: detected: Privileged Access Never Oct 29 23:30:39.386740 kernel: CPU features: detected: RAS Extension Support Oct 29 23:30:39.386748 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 29 23:30:39.386755 kernel: alternatives: applying system-wide alternatives Oct 29 23:30:39.386764 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 29 23:30:39.386772 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Oct 29 23:30:39.386780 kernel: devtmpfs: initialized Oct 29 23:30:39.386787 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 23:30:39.386795 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 29 23:30:39.386802 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 29 23:30:39.386810 kernel: 0 pages in range for non-PLT usage Oct 29 23:30:39.386818 kernel: 515056 pages in range for PLT usage Oct 29 23:30:39.386826 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 23:30:39.386833 kernel: SMBIOS 3.0.0 present. 
Oct 29 23:30:39.386840 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 29 23:30:39.386848 kernel: DMI: Memory slots populated: 1/1 Oct 29 23:30:39.386855 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 23:30:39.386863 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 29 23:30:39.386872 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 29 23:30:39.386879 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 29 23:30:39.386887 kernel: audit: initializing netlink subsys (disabled) Oct 29 23:30:39.386895 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Oct 29 23:30:39.386902 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 23:30:39.386910 kernel: cpuidle: using governor menu Oct 29 23:30:39.386917 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 29 23:30:39.386926 kernel: ASID allocator initialised with 32768 entries Oct 29 23:30:39.386934 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 23:30:39.386942 kernel: Serial: AMBA PL011 UART driver Oct 29 23:30:39.386955 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 23:30:39.386963 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 29 23:30:39.386971 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 29 23:30:39.386993 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 29 23:30:39.387003 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 23:30:39.387011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 29 23:30:39.387019 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 29 23:30:39.387026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 29 23:30:39.387034 kernel: ACPI: Added _OSI(Module Device) Oct 29 23:30:39.387042 kernel: ACPI: Added _OSI(Processor Device) Oct 29 23:30:39.387049 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 23:30:39.387057 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 23:30:39.387066 kernel: ACPI: Interpreter enabled Oct 29 23:30:39.387073 kernel: ACPI: Using GIC for interrupt routing Oct 29 23:30:39.387081 kernel: ACPI: MCFG table detected, 1 entries Oct 29 23:30:39.387088 kernel: ACPI: CPU0 has been hot-added Oct 29 23:30:39.387096 kernel: ACPI: CPU1 has been hot-added Oct 29 23:30:39.387104 kernel: ACPI: CPU2 has been hot-added Oct 29 23:30:39.387111 kernel: ACPI: CPU3 has been hot-added Oct 29 23:30:39.387120 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 29 23:30:39.387128 kernel: printk: legacy console [ttyAMA0] enabled Oct 29 23:30:39.387136 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 23:30:39.387316 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 23:30:39.387405 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 29 23:30:39.387487 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 29 23:30:39.387569 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 29 23:30:39.387649 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 29 23:30:39.387659 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 29 23:30:39.387667 
kernel: PCI host bridge to bus 0000:00 Oct 29 23:30:39.387753 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 29 23:30:39.387827 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 29 23:30:39.387903 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 29 23:30:39.387999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 23:30:39.388104 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 29 23:30:39.388201 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 29 23:30:39.388292 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 29 23:30:39.388378 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 29 23:30:39.388458 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 29 23:30:39.388539 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 29 23:30:39.388623 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 29 23:30:39.388714 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 29 23:30:39.388791 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 29 23:30:39.388867 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 29 23:30:39.388942 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 29 23:30:39.388960 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 29 23:30:39.388968 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 29 23:30:39.388987 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 29 23:30:39.388995 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 29 23:30:39.389003 kernel: iommu: Default domain type: Translated Oct 29 23:30:39.389014 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 29 23:30:39.389021 kernel: efivars: Registered efivars operations Oct 29 23:30:39.389029 kernel: vgaarb: loaded Oct 29 23:30:39.389037 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 29 23:30:39.389044 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 23:30:39.389052 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 23:30:39.389059 kernel: pnp: PnP ACPI init Oct 29 23:30:39.389161 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 29 23:30:39.389173 kernel: pnp: PnP ACPI: found 1 devices Oct 29 23:30:39.389181 kernel: NET: Registered PF_INET protocol family Oct 29 23:30:39.389188 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 29 23:30:39.389196 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 29 23:30:39.389204 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 23:30:39.389212 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 23:30:39.389221 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 29 23:30:39.389229 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 29 23:30:39.389237 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 23:30:39.389244 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 23:30:39.389252 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 23:30:39.389260 kernel: PCI: CLS 0 bytes, default 64 Oct 29 23:30:39.389267 
kernel: kvm [1]: HYP mode not available Oct 29 23:30:39.389276 kernel: Initialise system trusted keyrings Oct 29 23:30:39.389284 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 29 23:30:39.389291 kernel: Key type asymmetric registered Oct 29 23:30:39.389299 kernel: Asymmetric key parser 'x509' registered Oct 29 23:30:39.389306 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 29 23:30:39.389314 kernel: io scheduler mq-deadline registered Oct 29 23:30:39.389321 kernel: io scheduler kyber registered Oct 29 23:30:39.389331 kernel: io scheduler bfq registered Oct 29 23:30:39.389339 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 29 23:30:39.389346 kernel: ACPI: button: Power Button [PWRB] Oct 29 23:30:39.389354 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 29 23:30:39.389439 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 29 23:30:39.389450 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 23:30:39.389457 kernel: thunder_xcv, ver 1.0 Oct 29 23:30:39.389466 kernel: thunder_bgx, ver 1.0 Oct 29 23:30:39.389474 kernel: nicpf, ver 1.0 Oct 29 23:30:39.389481 kernel: nicvf, ver 1.0 Oct 29 23:30:39.389573 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 29 23:30:39.389651 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-29T23:30:38 UTC (1761780638) Oct 29 23:30:39.389661 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 29 23:30:39.389671 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 29 23:30:39.389678 kernel: watchdog: NMI not fully supported Oct 29 23:30:39.389686 kernel: watchdog: Hard watchdog permanently disabled Oct 29 23:30:39.389693 kernel: NET: Registered PF_INET6 protocol family Oct 29 23:30:39.389701 kernel: Segment Routing with IPv6 Oct 29 23:30:39.389709 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 23:30:39.389717 kernel: NET: Registered PF_PACKET protocol family Oct 29 23:30:39.389726 kernel: Key type dns_resolver registered Oct 29 23:30:39.389733 kernel: registered taskstats version 1 Oct 29 23:30:39.389746 kernel: Loading compiled-in X.509 certificates Oct 29 23:30:39.389756 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: c6256e7d9c20dfbb4deda09bbb20ce7eab6ed949' Oct 29 23:30:39.389763 kernel: Demotion targets for Node 0: null Oct 29 23:30:39.389771 kernel: Key type .fscrypt registered Oct 29 23:30:39.389779 kernel: Key type fscrypt-provisioning registered Oct 29 23:30:39.389787 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 29 23:30:39.389796 kernel: ima: Allocated hash algorithm: sha1 Oct 29 23:30:39.389804 kernel: ima: No architecture policies found Oct 29 23:30:39.389814 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 29 23:30:39.389822 kernel: clk: Disabling unused clocks Oct 29 23:30:39.389829 kernel: PM: genpd: Disabling unused power domains Oct 29 23:30:39.389837 kernel: Freeing unused kernel memory: 12992K Oct 29 23:30:39.389844 kernel: Run /init as init process Oct 29 23:30:39.389854 kernel: with arguments: Oct 29 23:30:39.389861 kernel: /init Oct 29 23:30:39.389869 kernel: with environment: Oct 29 23:30:39.389877 kernel: HOME=/ Oct 29 23:30:39.389885 kernel: TERM=linux Oct 29 23:30:39.390078 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 29 23:30:39.390200 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 29 23:30:39.390215 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 23:30:39.390223 kernel: GPT:16515071 != 27000831 Oct 29 23:30:39.390231 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 23:30:39.390238 kernel: GPT:16515071 != 27000831 Oct 29 23:30:39.390246 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 23:30:39.390253 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 23:30:39.390262 kernel: SCSI subsystem initialized Oct 29 23:30:39.390270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 23:30:39.390278 kernel: device-mapper: uevent: version 1.0.3 Oct 29 23:30:39.390286 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 23:30:39.390293 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 29 23:30:39.390301 kernel: raid6: neonx8 gen() 15771 MB/s Oct 29 23:30:39.390308 kernel: raid6: neonx4 gen() 15558 MB/s Oct 29 23:30:39.390317 kernel: raid6: neonx2 gen() 13185 MB/s Oct 29 23:30:39.390325 kernel: raid6: neonx1 gen() 10413 MB/s Oct 29 23:30:39.390333 kernel: raid6: int64x8 gen() 6899 MB/s Oct 29 23:30:39.390340 kernel: raid6: int64x4 gen() 7309 MB/s Oct 29 23:30:39.390348 kernel: raid6: int64x2 gen() 6108 MB/s Oct 29 23:30:39.390355 kernel: raid6: int64x1 gen() 5044 MB/s Oct 29 23:30:39.390363 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s Oct 29 23:30:39.390372 kernel: raid6: .... 
xor() 12049 MB/s, rmw enabled Oct 29 23:30:39.390380 kernel: raid6: using neon recovery algorithm Oct 29 23:30:39.390388 kernel: xor: measuring software checksum speed Oct 29 23:30:39.390396 kernel: 8regs : 21584 MB/sec Oct 29 23:30:39.390404 kernel: 32regs : 15339 MB/sec Oct 29 23:30:39.390411 kernel: arm64_neon : 25931 MB/sec Oct 29 23:30:39.390419 kernel: xor: using function: arm64_neon (25931 MB/sec) Oct 29 23:30:39.390428 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 23:30:39.390436 kernel: BTRFS: device fsid f84bda9c-c65c-4b2e-9db1-1debc07ad11f devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (205) Oct 29 23:30:39.390444 kernel: BTRFS info (device dm-0): first mount of filesystem f84bda9c-c65c-4b2e-9db1-1debc07ad11f Oct 29 23:30:39.390452 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:30:39.390459 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 23:30:39.390467 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 23:30:39.390475 kernel: loop: module loaded Oct 29 23:30:39.390484 kernel: loop0: detected capacity change from 0 to 91464 Oct 29 23:30:39.390491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 23:30:39.390500 systemd[1]: Successfully made /usr/ read-only. Oct 29 23:30:39.390518 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:30:39.390526 systemd[1]: Detected virtualization kvm. Oct 29 23:30:39.390534 systemd[1]: Detected architecture arm64. Oct 29 23:30:39.390544 systemd[1]: Running in initrd. Oct 29 23:30:39.390552 systemd[1]: No hostname configured, using default hostname. Oct 29 23:30:39.390561 systemd[1]: Hostname set to . Oct 29 23:30:39.390570 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 23:30:39.390578 systemd[1]: Queued start job for default target initrd.target. Oct 29 23:30:39.390586 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 23:30:39.390594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:30:39.390604 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:30:39.390613 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 23:30:39.390622 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:30:39.390631 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 23:30:39.390639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 23:30:39.390650 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:30:39.390659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:30:39.390668 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:30:39.390676 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:30:39.390684 systemd[1]: Reached target slices.target - Slice Units. 
Oct 29 23:30:39.390693 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:30:39.390701 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:30:39.390711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:30:39.390719 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:30:39.390727 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 23:30:39.390736 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 23:30:39.390753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:30:39.390764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:30:39.390773 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:30:39.390781 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:30:39.390790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 23:30:39.390798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 23:30:39.390807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:30:39.390815 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 29 23:30:39.390826 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 23:30:39.390834 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 23:30:39.390842 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:30:39.390851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:30:39.390859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:30:39.390869 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 23:30:39.390878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:30:39.390886 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 23:30:39.390915 systemd-journald[347]: Collecting audit messages is disabled. Oct 29 23:30:39.390937 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 23:30:39.390953 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 23:30:39.390964 systemd-journald[347]: Journal started Oct 29 23:30:39.390996 systemd-journald[347]: Runtime Journal (/run/log/journal/eeb14d0829a94a22b70396ddafd22a48) is 6M, max 48.5M, 42.4M free. Oct 29 23:30:39.393170 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:30:39.396289 systemd-modules-load[348]: Inserted module 'br_netfilter' Oct 29 23:30:39.397454 kernel: Bridge firewalling registered Oct 29 23:30:39.401002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:30:39.403357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:30:39.406078 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:39.409767 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 29 23:30:39.412919 systemd-tmpfiles[364]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 23:30:39.413121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:30:39.417063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 23:30:39.420648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:30:39.424593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:30:39.434692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:30:39.438054 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:30:39.440146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:30:39.443320 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 29 23:30:39.445841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:30:39.470469 dracut-cmdline[388]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3f6e690ee2ade37dd6082d1ad3b53d2d12b3a76b4644e8ca271364e3a8c31ac Oct 29 23:30:39.494463 systemd-resolved[389]: Positive Trust Anchors: Oct 29 23:30:39.494482 systemd-resolved[389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:30:39.494486 systemd-resolved[389]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 23:30:39.494517 systemd-resolved[389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:30:39.518106 systemd-resolved[389]: Defaulting to hostname 'linux'. Oct 29 23:30:39.519089 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:30:39.520532 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:30:39.555003 kernel: Loading iSCSI transport class v2.0-870. Oct 29 23:30:39.564025 kernel: iscsi: registered transport (tcp) Oct 29 23:30:39.577229 kernel: iscsi: registered transport (qla4xxx) Oct 29 23:30:39.577289 kernel: QLogic iSCSI HBA Driver Oct 29 23:30:39.598876 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:30:39.622889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:30:39.625369 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:30:39.674447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 23:30:39.677063 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Oct 29 23:30:39.679942 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 23:30:39.714767 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:30:39.718788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:30:39.748198 systemd-udevd[633]: Using default interface naming scheme 'v257'. Oct 29 23:30:39.756502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:30:39.760970 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 23:30:39.784625 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:30:39.789097 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:30:39.791245 dracut-pre-trigger[702]: rd.md=0: removing MD RAID activation Oct 29 23:30:39.817846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:30:39.820703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:30:39.843719 systemd-networkd[739]: lo: Link UP Oct 29 23:30:39.843729 systemd-networkd[739]: lo: Gained carrier Oct 29 23:30:39.845133 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:30:39.846672 systemd[1]: Reached target network.target - Network. Oct 29 23:30:39.878273 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:30:39.883764 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 23:30:39.945960 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 23:30:39.954783 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 23:30:39.966796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 23:30:39.973736 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 23:30:39.976340 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 23:30:39.981457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:30:39.981591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:39.983865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:30:39.992730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:30:39.997803 systemd-networkd[739]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:30:39.997816 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 23:30:40.003320 disk-uuid[802]: Primary Header is updated. Oct 29 23:30:40.003320 disk-uuid[802]: Secondary Entries is updated. Oct 29 23:30:40.003320 disk-uuid[802]: Secondary Header is updated. Oct 29 23:30:39.998680 systemd-networkd[739]: eth0: Link UP Oct 29 23:30:39.998843 systemd-networkd[739]: eth0: Gained carrier Oct 29 23:30:39.998854 systemd-networkd[739]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:30:40.014484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Oct 29 23:30:40.016211 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 23:30:40.017667 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:30:40.019666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:30:40.027366 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:30:40.034405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 23:30:40.039117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:40.068055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:30:41.040043 disk-uuid[803]: Warning: The kernel is still using the old partition table. Oct 29 23:30:41.040043 disk-uuid[803]: The new table will be used at the next reboot or after you Oct 29 23:30:41.040043 disk-uuid[803]: run partprobe(8) or kpartx(8) Oct 29 23:30:41.040043 disk-uuid[803]: The operation has completed successfully. Oct 29 23:30:41.045064 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 23:30:41.045174 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 23:30:41.047895 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 23:30:41.080001 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833) Oct 29 23:30:41.080067 kernel: BTRFS info (device vda6): first mount of filesystem 69a058e1-a2ca-4b98-8b6f-1187a84af986 Oct 29 23:30:41.082180 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:30:41.085002 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:30:41.085046 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:30:41.091000 kernel: BTRFS info (device vda6): last unmount of filesystem 69a058e1-a2ca-4b98-8b6f-1187a84af986 Oct 29 23:30:41.091669 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 23:30:41.093933 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 29 23:30:41.208203 ignition[852]: Ignition 2.22.0 Oct 29 23:30:41.208221 ignition[852]: Stage: fetch-offline Oct 29 23:30:41.208262 ignition[852]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:41.208272 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:30:41.208363 ignition[852]: parsed url from cmdline: "" Oct 29 23:30:41.208367 ignition[852]: no config URL provided Oct 29 23:30:41.208372 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 23:30:41.208380 ignition[852]: no config at "/usr/lib/ignition/user.ign" Oct 29 23:30:41.208422 ignition[852]: op(1): [started] loading QEMU firmware config module Oct 29 23:30:41.208426 ignition[852]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 23:30:41.219642 ignition[852]: op(1): [finished] loading QEMU firmware config module Oct 29 23:30:41.262408 ignition[852]: parsing config with SHA512: aed5ffdeb70ca7fee6db5e73376e9c227e9c4b97045edbd42ac6aa2807e4f94de5ac91186d672fb9fb94d064ebe70358becf50a80047965cadf0429d363b2615 Oct 29 23:30:41.268573 unknown[852]: fetched base config from "system" Oct 29 23:30:41.268587 unknown[852]: fetched user config from "qemu" Oct 29 23:30:41.269019 ignition[852]: fetch-offline: fetch-offline passed Oct 29 23:30:41.269085 ignition[852]: Ignition finished successfully Oct 29 23:30:41.271486 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:30:41.273776 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 23:30:41.274687 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 29 23:30:41.308259 ignition[864]: Ignition 2.22.0 Oct 29 23:30:41.308277 ignition[864]: Stage: kargs Oct 29 23:30:41.308428 ignition[864]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:41.308436 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:30:41.309403 ignition[864]: kargs: kargs passed Oct 29 23:30:41.312735 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 29 23:30:41.309456 ignition[864]: Ignition finished successfully Oct 29 23:30:41.315041 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 29 23:30:41.340725 ignition[872]: Ignition 2.22.0 Oct 29 23:30:41.340743 ignition[872]: Stage: disks Oct 29 23:30:41.340899 ignition[872]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:41.344227 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 23:30:41.340907 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:30:41.345508 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 23:30:41.341687 ignition[872]: disks: disks passed Oct 29 23:30:41.347463 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 23:30:41.341735 ignition[872]: Ignition finished successfully Oct 29 23:30:41.349763 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:30:41.351964 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:30:41.353648 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:30:41.356810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 29 23:30:41.388056 systemd-fsck[883]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 29 23:30:41.393448 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 23:30:41.395999 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 29 23:30:41.459018 kernel: EXT4-fs (vda9): mounted filesystem 1648bfe3-fe28-4898-89ac-a64f076d042f r/w with ordered data mode. Quota mode: none. Oct 29 23:30:41.459664 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 23:30:41.461166 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 23:30:41.464000 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:30:41.465926 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 29 23:30:41.467149 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 23:30:41.467188 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 23:30:41.467247 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:30:41.477823 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 23:30:41.480583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 23:30:41.485862 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Oct 29 23:30:41.485909 kernel: BTRFS info (device vda6): first mount of filesystem 69a058e1-a2ca-4b98-8b6f-1187a84af986 Oct 29 23:30:41.485928 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:30:41.490453 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:30:41.490516 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:30:41.491555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:30:41.494116 systemd-networkd[739]: eth0: Gained IPv6LL Oct 29 23:30:41.525127 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 23:30:41.529587 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Oct 29 23:30:41.533303 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 23:30:41.537721 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 23:30:41.613574 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 29 23:30:41.616299 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 23:30:41.618330 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 23:30:41.642563 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 23:30:41.644308 kernel: BTRFS info (device vda6): last unmount of filesystem 69a058e1-a2ca-4b98-8b6f-1187a84af986 Oct 29 23:30:41.660121 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 23:30:41.676813 ignition[1005]: INFO : Ignition 2.22.0 Oct 29 23:30:41.676813 ignition[1005]: INFO : Stage: mount Oct 29 23:30:41.678614 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:41.678614 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:30:41.678614 ignition[1005]: INFO : mount: mount passed Oct 29 23:30:41.678614 ignition[1005]: INFO : Ignition finished successfully Oct 29 23:30:41.679847 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Oct 29 23:30:41.683412 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 23:30:42.461177 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:30:42.482757 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Oct 29 23:30:42.482804 kernel: BTRFS info (device vda6): first mount of filesystem 69a058e1-a2ca-4b98-8b6f-1187a84af986 Oct 29 23:30:42.482815 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:30:42.486519 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:30:42.486540 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:30:42.488025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:30:42.519879 ignition[1035]: INFO : Ignition 2.22.0 Oct 29 23:30:42.519879 ignition[1035]: INFO : Stage: files Oct 29 23:30:42.521748 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:42.521748 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:30:42.521748 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Oct 29 23:30:42.525417 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 23:30:42.525417 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 23:30:42.528579 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 23:30:42.528579 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 23:30:42.531598 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 23:30:42.531598 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 29 23:30:42.531598 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 29 23:30:42.528711 unknown[1035]: wrote ssh authorized keys file for user: core Oct 29 23:30:42.702883 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 23:30:42.915385 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 29 23:30:42.915385 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 23:30:42.920021 
ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 23:30:42.920021 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 23:30:42.942549 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 23:30:42.942549 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 29 23:30:43.292033 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 23:30:43.574375 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 23:30:43.574375 ignition[1035]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 23:30:43.579190 ignition[1035]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 23:30:43.596393 ignition[1035]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 23:30:43.600030 ignition[1035]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 23:30:43.603088 ignition[1035]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 23:30:43.603088 ignition[1035]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 23:30:43.603088 ignition[1035]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 23:30:43.603088 ignition[1035]: INFO : files: createResultFile: createFiles: 
op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 23:30:43.603088 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 23:30:43.603088 ignition[1035]: INFO : files: files passed Oct 29 23:30:43.603088 ignition[1035]: INFO : Ignition finished successfully Oct 29 23:30:43.603850 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 23:30:43.607254 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 23:30:43.611137 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 23:30:43.624484 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 23:30:43.624597 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 29 23:30:43.628529 initrd-setup-root-after-ignition[1065]: grep: /sysroot/oem/oem-release: No such file or directory Oct 29 23:30:43.630230 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:30:43.630230 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:30:43.633771 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:30:43.632635 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:30:43.635397 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 29 23:30:43.640160 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 29 23:30:43.674698 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 23:30:43.674816 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 29 23:30:43.677531 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 23:30:43.679748 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 23:30:43.682209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 23:30:43.683182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 23:30:43.717587 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:30:43.720598 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 23:30:43.738672 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 29 23:30:43.738896 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:30:43.740480 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:30:43.743008 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 23:30:43.745044 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 23:30:43.745196 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:30:43.748263 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 23:30:43.750302 systemd[1]: Stopped target basic.target - Basic System. Oct 29 23:30:43.752402 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 23:30:43.754603 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Oct 29 23:30:43.756905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 29 23:30:43.759336 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:30:43.761496 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 23:30:43.763669 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:30:43.766211 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 23:30:43.768370 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 23:30:43.770513 systemd[1]: Stopped target swap.target - Swaps. Oct 29 23:30:43.772278 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 23:30:43.772420 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:30:43.774941 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:30:43.777129 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:30:43.779381 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 23:30:43.780116 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:30:43.781914 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 23:30:43.782099 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 23:30:43.785017 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 23:30:43.785158 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:30:43.787793 systemd[1]: Stopped target paths.target - Path Units. Oct 29 23:30:43.789688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 23:30:43.793050 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:30:43.795294 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 23:30:43.797218 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 23:30:43.799598 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 23:30:43.799696 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:30:43.801455 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 23:30:43.801540 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:30:43.803316 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 23:30:43.803441 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:30:43.805318 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 23:30:43.805433 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 29 23:30:43.808096 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 23:30:43.809947 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 23:30:43.810114 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:30:43.835671 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 23:30:43.836703 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 23:30:43.836868 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:30:43.839232 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 29 23:30:43.839365 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:30:43.841594 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 23:30:43.841731 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:30:43.849239 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 23:30:43.851020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 29 23:30:43.853436 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 23:30:43.859706 ignition[1092]: INFO : Ignition 2.22.0 Oct 29 23:30:43.859706 ignition[1092]: INFO : Stage: umount Oct 29 23:30:43.863704 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:43.863704 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:30:43.863704 ignition[1092]: INFO : umount: umount passed Oct 29 23:30:43.863704 ignition[1092]: INFO : Ignition finished successfully Oct 29 23:30:43.865283 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 23:30:43.865390 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 23:30:43.867712 systemd[1]: Stopped target network.target - Network. Oct 29 23:30:43.868837 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 23:30:43.868918 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 23:30:43.870796 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 23:30:43.870866 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 23:30:43.873675 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 23:30:43.873737 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 23:30:43.875911 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 23:30:43.875998 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 23:30:43.878153 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 23:30:43.880024 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 23:30:43.893864 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 23:30:43.894021 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 23:30:43.902264 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 23:30:43.902373 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 23:30:43.906351 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 23:30:43.907622 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 29 23:30:43.910387 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 23:30:43.911716 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 23:30:43.911761 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:30:43.914066 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 23:30:43.914135 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 23:30:43.917124 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 23:30:43.918102 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 23:30:43.918179 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:30:43.920419 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Oct 29 23:30:43.920477 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:30:43.922483 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 23:30:43.922537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 23:30:43.924555 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:30:43.941391 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 23:30:43.941574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:30:43.944176 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 23:30:43.944220 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 23:30:43.946300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 23:30:43.946335 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:30:43.948241 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 23:30:43.948313 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:30:43.951326 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 23:30:43.951387 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 23:30:43.954380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 23:30:43.954439 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:30:43.964695 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 29 23:30:43.966036 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 23:30:43.966118 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:30:43.968657 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 23:30:43.968718 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:30:43.971144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:30:43.971216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:43.974414 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 23:30:43.975088 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 23:30:43.978451 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 23:30:43.979695 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 23:30:43.982805 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 23:30:43.985317 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 23:30:44.007416 systemd[1]: Switching root. Oct 29 23:30:44.048648 systemd-journald[347]: Journal stopped Oct 29 23:30:44.974073 systemd-journald[347]: Received SIGTERM from PID 1 (systemd). 
Oct 29 23:30:44.974128 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 23:30:44.974145 kernel: SELinux: policy capability open_perms=1 Oct 29 23:30:44.974156 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 23:30:44.974178 kernel: SELinux: policy capability always_check_network=0 Oct 29 23:30:44.974190 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 23:30:44.974208 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 23:30:44.974218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 23:30:44.974235 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 23:30:44.974245 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 23:30:44.974256 kernel: audit: type=1403 audit(1761780644.309:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 23:30:44.974268 systemd[1]: Successfully loaded SELinux policy in 64.869ms. Oct 29 23:30:44.974281 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.723ms. Oct 29 23:30:44.974312 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:30:44.974326 systemd[1]: Detected virtualization kvm. Oct 29 23:30:44.974337 systemd[1]: Detected architecture arm64. Oct 29 23:30:44.974348 systemd[1]: Detected first boot. Oct 29 23:30:44.974358 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 29 23:30:44.974369 kernel: hrtimer: interrupt took 5166560 ns Oct 29 23:30:44.974380 zram_generator::config[1139]: No configuration found. Oct 29 23:30:44.974393 kernel: NET: Registered PF_VSOCK protocol family Oct 29 23:30:44.974404 systemd[1]: Populated /etc with preset unit settings. Oct 29 23:30:44.974417 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 23:30:44.974434 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 23:30:44.974444 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 23:30:44.974455 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 23:30:44.974468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 23:30:44.974479 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 29 23:30:44.974489 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 23:30:44.974500 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 23:30:44.974511 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 23:30:44.974521 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 23:30:44.974532 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 23:30:44.974544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:30:44.974555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:30:44.974566 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 23:30:44.974577 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 29 23:30:44.974588 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 23:30:44.974599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:30:44.974613 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 29 23:30:44.974626 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:30:44.974637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:30:44.974647 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 23:30:44.974658 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 23:30:44.974668 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 29 23:30:44.974679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 29 23:30:44.974690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:30:44.974701 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:30:44.974711 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:30:44.974721 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:30:44.974732 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 23:30:44.974743 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 23:30:44.974754 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 23:30:44.974766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:30:44.974777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:30:44.974787 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:30:44.974798 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 23:30:44.974808 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 23:30:44.974819 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 23:30:44.974829 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 23:30:44.974840 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 23:30:44.974855 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 23:30:44.974866 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 29 23:30:44.974876 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 23:30:44.974888 systemd[1]: Reached target machines.target - Containers. Oct 29 23:30:44.974899 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 23:30:44.974910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:44.974921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:30:44.974938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 23:30:44.974950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:44.974961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 29 23:30:44.974971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:44.975053 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 23:30:44.975066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:44.975080 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 23:30:44.975090 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 23:30:44.975101 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 29 23:30:44.975112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 23:30:44.975122 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 23:30:44.975133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:44.975146 kernel: fuse: init (API version 7.41) Oct 29 23:30:44.975158 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:30:44.975169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:30:44.975179 kernel: ACPI: bus type drm_connector registered Oct 29 23:30:44.975189 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:30:44.975200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 23:30:44.975210 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 23:30:44.975221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:30:44.975233 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 23:30:44.975244 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 23:30:44.975254 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 23:30:44.975283 systemd-journald[1214]: Collecting audit messages is disabled. Oct 29 23:30:44.975310 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 23:30:44.975321 systemd-journald[1214]: Journal started Oct 29 23:30:44.975344 systemd-journald[1214]: Runtime Journal (/run/log/journal/eeb14d0829a94a22b70396ddafd22a48) is 6M, max 48.5M, 42.4M free. Oct 29 23:30:44.710080 systemd[1]: Queued start job for default target multi-user.target. Oct 29 23:30:44.730231 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 23:30:44.730685 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 23:30:44.978293 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:30:44.979246 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 23:30:44.980699 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 23:30:44.984034 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 23:30:44.985653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:30:44.987406 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 23:30:44.987595 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 23:30:44.989332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 29 23:30:44.989490 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:44.991066 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:30:44.991227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:30:44.992686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:30:44.992864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:30:44.994722 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 23:30:44.994877 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 23:30:44.996399 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:44.996562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:44.998232 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:30:45.000086 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:30:45.002444 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 23:30:45.004247 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 23:30:45.016847 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:30:45.018508 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 29 23:30:45.021043 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 23:30:45.023106 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 23:30:45.024341 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 23:30:45.024379 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:30:45.026288 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 29 23:30:45.027845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:45.038386 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 23:30:45.040730 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 23:30:45.042219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:30:45.043282 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 23:30:45.044904 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:30:45.046771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:30:45.052056 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 23:30:45.054031 systemd-journald[1214]: Time spent on flushing to /var/log/journal/eeb14d0829a94a22b70396ddafd22a48 is 18.258ms for 870 entries. Oct 29 23:30:45.054031 systemd-journald[1214]: System Journal (/var/log/journal/eeb14d0829a94a22b70396ddafd22a48) is 8M, max 163.5M, 155.5M free. Oct 29 23:30:45.083473 systemd-journald[1214]: Received client request to flush runtime journal. 
Oct 29 23:30:45.083530 kernel: loop1: detected capacity change from 0 to 119344 Oct 29 23:30:45.054571 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 29 23:30:45.057608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:30:45.059938 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 23:30:45.061534 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 23:30:45.064472 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 23:30:45.068782 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 23:30:45.072072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 23:30:45.076781 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:30:45.086939 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 23:30:45.099030 kernel: loop2: detected capacity change from 0 to 100624 Oct 29 23:30:45.098964 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 23:30:45.105138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:30:45.108132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:30:45.109890 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 29 23:30:45.120606 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 23:30:45.127051 kernel: loop3: detected capacity change from 0 to 200800 Oct 29 23:30:45.130011 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Oct 29 23:30:45.130295 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Oct 29 23:30:45.135323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:30:45.163446 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 23:30:45.169993 kernel: loop4: detected capacity change from 0 to 119344 Oct 29 23:30:45.175058 kernel: loop5: detected capacity change from 0 to 100624 Oct 29 23:30:45.181150 kernel: loop6: detected capacity change from 0 to 200800 Oct 29 23:30:45.185467 (sd-merge)[1282]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 29 23:30:45.188549 (sd-merge)[1282]: Merged extensions into '/usr'. Oct 29 23:30:45.192226 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 23:30:45.192250 systemd[1]: Reloading... Oct 29 23:30:45.217393 systemd-resolved[1270]: Positive Trust Anchors: Oct 29 23:30:45.217415 systemd-resolved[1270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:30:45.217419 systemd-resolved[1270]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 29 23:30:45.217450 systemd-resolved[1270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:30:45.225958 systemd-resolved[1270]: Defaulting to hostname 'linux'. Oct 29 23:30:45.245403 zram_generator::config[1314]: No configuration found. Oct 29 23:30:45.379474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 23:30:45.379756 systemd[1]: Reloading finished in 187 ms. Oct 29 23:30:45.415164 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:30:45.416713 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 23:30:45.420213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:30:45.439221 systemd[1]: Starting ensure-sysext.service... Oct 29 23:30:45.441197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:30:45.450213 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)... Oct 29 23:30:45.450228 systemd[1]: Reloading... Oct 29 23:30:45.457118 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 23:30:45.457151 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 29 23:30:45.457356 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 23:30:45.457825 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 29 23:30:45.458522 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 23:30:45.458717 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Oct 29 23:30:45.458765 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Oct 29 23:30:45.465497 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:30:45.465510 systemd-tmpfiles[1347]: Skipping /boot Oct 29 23:30:45.471732 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:30:45.471747 systemd-tmpfiles[1347]: Skipping /boot Oct 29 23:30:45.505999 zram_generator::config[1382]: No configuration found. Oct 29 23:30:45.631077 systemd[1]: Reloading finished in 180 ms. Oct 29 23:30:45.656703 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 23:30:45.673740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:30:45.679992 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:30:45.682203 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 23:30:45.684760 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 29 23:30:45.695082 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 23:30:45.698119 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:30:45.706148 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 23:30:45.711817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:45.723275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:45.727331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:45.732388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:45.733763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:45.733943 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:45.738362 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 23:30:45.744765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:30:45.749529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:30:45.752270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:30:45.752461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:45.754648 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:45.754846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:45.755482 systemd-udevd[1417]: Using default interface naming scheme 'v257'. Oct 29 23:30:45.762233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:45.764441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:45.770351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:45.773432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:45.774879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:45.775116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:45.777284 augenrules[1448]: No rules Oct 29 23:30:45.780085 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 23:30:45.783084 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:30:45.783305 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:30:45.785784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:30:45.786097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:45.788240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:30:45.788450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 29 23:30:45.790513 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:45.790742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:45.795041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:30:45.797796 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 23:30:45.809257 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:30:45.810618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:45.812549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:45.815217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:30:45.822142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:45.826297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:45.827675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:45.827805 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:45.830656 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:30:45.831892 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 23:30:45.834888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:30:45.836061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:45.837937 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:30:45.846331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:30:45.848467 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:45.848636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:45.850590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:30:45.850783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:30:45.851627 augenrules[1478]: /sbin/augenrules: No change Oct 29 23:30:45.858140 systemd[1]: Finished ensure-sysext.service. Oct 29 23:30:45.869814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:30:45.869943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:30:45.872671 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 23:30:45.873789 augenrules[1507]: No rules Oct 29 23:30:45.874284 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:30:45.874490 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:30:45.911870 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Oct 29 23:30:45.928996 systemd-networkd[1489]: lo: Link UP Oct 29 23:30:45.929005 systemd-networkd[1489]: lo: Gained carrier Oct 29 23:30:45.929901 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:30:45.930348 systemd-networkd[1489]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:30:45.930360 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 23:30:45.931284 systemd-networkd[1489]: eth0: Link UP Oct 29 23:30:45.931432 systemd-networkd[1489]: eth0: Gained carrier Oct 29 23:30:45.931453 systemd-networkd[1489]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 29 23:30:45.931496 systemd[1]: Reached target network.target - Network. Oct 29 23:30:45.935412 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 23:30:45.939358 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 23:30:45.952112 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 23:30:45.959111 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 29 23:30:45.961126 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 23:30:45.962332 systemd-timesyncd[1511]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 23:30:45.962392 systemd-timesyncd[1511]: Initial clock synchronization to Wed 2025-10-29 23:30:46.063305 UTC. Oct 29 23:30:45.965995 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 23:30:45.995473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 23:30:45.999856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 23:30:46.028201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 23:30:46.044429 ldconfig[1415]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 23:30:46.052048 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 23:30:46.056265 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 23:30:46.063061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:30:46.078937 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 23:30:46.122651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:46.125687 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:30:46.127157 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 23:30:46.128632 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 23:30:46.130336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 23:30:46.131949 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 23:30:46.133428 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Oct 29 23:30:46.134936 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 23:30:46.135007 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:30:46.136042 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:30:46.137976 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 23:30:46.140739 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 23:30:46.143884 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 23:30:46.145730 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 23:30:46.147318 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 23:30:46.152119 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 23:30:46.153752 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 23:30:46.155839 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 23:30:46.157214 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:30:46.158420 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:30:46.159620 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:30:46.159657 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:30:46.160876 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 23:30:46.163331 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 23:30:46.166155 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 23:30:46.168472 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 23:30:46.170673 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 23:30:46.172134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 23:30:46.173267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 23:30:46.177434 jq[1561]: false Oct 29 23:30:46.177677 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 23:30:46.180128 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 23:30:46.183757 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 23:30:46.183869 extend-filesystems[1562]: Found /dev/vda6 Oct 29 23:30:46.188058 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 23:30:46.188306 extend-filesystems[1562]: Found /dev/vda9 Oct 29 23:30:46.189436 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 23:30:46.189896 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 23:30:46.191304 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 23:30:46.193442 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Oct 29 23:30:46.194825 extend-filesystems[1562]: Checking size of /dev/vda9 Oct 29 23:30:46.198189 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 23:30:46.200208 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 23:30:46.200411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 23:30:46.206080 jq[1580]: true Oct 29 23:30:46.202576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 23:30:46.202764 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 23:30:46.207337 extend-filesystems[1562]: Resized partition /dev/vda9 Oct 29 23:30:46.210455 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 23:30:46.216230 extend-filesystems[1594]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 23:30:46.218522 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 23:30:46.227188 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 29 23:30:46.232006 jq[1587]: true Oct 29 23:30:46.232830 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 29 23:30:46.242002 update_engine[1578]: I20251029 23:30:46.239717 1578 main.cc:92] Flatcar Update Engine starting Oct 29 23:30:46.256140 tar[1584]: linux-arm64/LICENSE Oct 29 23:30:46.256140 tar[1584]: linux-arm64/helm Oct 29 23:30:46.262565 dbus-daemon[1559]: [system] SELinux support is enabled Oct 29 23:30:46.262825 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 23:30:46.266163 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 29 23:30:46.267046 update_engine[1578]: I20251029 23:30:46.266824 1578 update_check_scheduler.cc:74] Next update check in 11m14s Oct 29 23:30:46.271798 systemd[1]: Started update-engine.service - Update Engine. Oct 29 23:30:46.274817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 23:30:46.274857 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 23:30:46.276953 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 23:30:46.276973 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 23:30:46.281164 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 23:30:46.291494 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 23:30:46.291494 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 23:30:46.291494 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 29 23:30:46.300667 extend-filesystems[1562]: Resized filesystem in /dev/vda9 Oct 29 23:30:46.293329 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 23:30:46.294322 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 23:30:46.313350 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Oct 29 23:30:46.320186 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 29 23:30:46.322047 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 29 23:30:46.333978 locksmithd[1622]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 23:30:46.349891 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (Power Button) Oct 29 23:30:46.350158 systemd-logind[1573]: New seat seat0. Oct 29 23:30:46.351558 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 23:30:46.403783 containerd[1597]: time="2025-10-29T23:30:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 23:30:46.404533 containerd[1597]: time="2025-10-29T23:30:46.404492333Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 23:30:46.417247 containerd[1597]: time="2025-10-29T23:30:46.417131688Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.737µs" Oct 29 23:30:46.417247 containerd[1597]: time="2025-10-29T23:30:46.417179412Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 23:30:46.417247 containerd[1597]: time="2025-10-29T23:30:46.417248405Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 23:30:46.417681 containerd[1597]: time="2025-10-29T23:30:46.417606498Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 23:30:46.417775 containerd[1597]: time="2025-10-29T23:30:46.417720380Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 23:30:46.417822 containerd[1597]: time="2025-10-29T23:30:46.417805578Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:30:46.417999 containerd[1597]: time="2025-10-29T23:30:46.417970222Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418074 containerd[1597]: time="2025-10-29T23:30:46.418040836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418360 containerd[1597]: time="2025-10-29T23:30:46.418334838Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418360 containerd[1597]: time="2025-10-29T23:30:46.418357241Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418413 containerd[1597]: time="2025-10-29T23:30:46.418370692Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418413 containerd[1597]: time="2025-10-29T23:30:46.418379362Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418472 containerd[1597]: time="2025-10-29T23:30:46.418454918Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418676 containerd[1597]: time="2025-10-29T23:30:46.418655943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418707 containerd[1597]: time="2025-10-29T23:30:46.418690297Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:30:46.418727 containerd[1597]: time="2025-10-29T23:30:46.418701965Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 23:30:46.418751 containerd[1597]: time="2025-10-29T23:30:46.418741587Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 23:30:46.420996 containerd[1597]: time="2025-10-29T23:30:46.419653612Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 23:30:46.420996 containerd[1597]: time="2025-10-29T23:30:46.419971880Z" level=info msg="metadata content store policy set" policy=shared Oct 29 23:30:46.427664 containerd[1597]: time="2025-10-29T23:30:46.427619568Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 23:30:46.427714 containerd[1597]: time="2025-10-29T23:30:46.427701728Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 23:30:46.427763 containerd[1597]: time="2025-10-29T23:30:46.427718582Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 23:30:46.427763 containerd[1597]: time="2025-10-29T23:30:46.427731910Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 23:30:46.427763 containerd[1597]: time="2025-10-29T23:30:46.427745118Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 23:30:46.427763 containerd[1597]: time="2025-10-29T23:30:46.427755975Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 23:30:46.427843 containerd[1597]: time="2025-10-29T23:30:46.427768210Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 23:30:46.427843 containerd[1597]: time="2025-10-29T23:30:46.427796447Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 23:30:46.427843 containerd[1597]: time="2025-10-29T23:30:46.427809330Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 23:30:46.427843 containerd[1597]: time="2025-10-29T23:30:46.427819459Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 23:30:46.427843 containerd[1597]: time="2025-10-29T23:30:46.427828777Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 23:30:46.427843 containerd[1597]: time="2025-10-29T23:30:46.427842227Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 23:30:46.428051 containerd[1597]: time="2025-10-29T23:30:46.428030044Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service 
type=io.containerd.service.v1 Oct 29 23:30:46.428082 containerd[1597]: time="2025-10-29T23:30:46.428057471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 23:30:46.428082 containerd[1597]: time="2025-10-29T23:30:46.428074122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 23:30:46.428160 containerd[1597]: time="2025-10-29T23:30:46.428086560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 23:30:46.428160 containerd[1597]: time="2025-10-29T23:30:46.428106938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 23:30:46.428160 containerd[1597]: time="2025-10-29T23:30:46.428120307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 23:30:46.428160 containerd[1597]: time="2025-10-29T23:30:46.428134567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 23:30:46.428160 containerd[1597]: time="2025-10-29T23:30:46.428145222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 23:30:46.428160 containerd[1597]: time="2025-10-29T23:30:46.428157498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 23:30:46.428260 containerd[1597]: time="2025-10-29T23:30:46.428168274Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 23:30:46.428260 containerd[1597]: time="2025-10-29T23:30:46.428185695Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 23:30:46.428413 containerd[1597]: time="2025-10-29T23:30:46.428391824Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 23:30:46.428413 containerd[1597]: time="2025-10-29T23:30:46.428412242Z" level=info msg="Start snapshots syncer" Oct 29 23:30:46.428478 containerd[1597]: time="2025-10-29T23:30:46.428439305Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 23:30:46.428707 containerd[1597]: time="2025-10-29T23:30:46.428665691Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 23:30:46.428832 containerd[1597]: time="2025-10-29T23:30:46.428725893Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 23:30:46.428832 containerd[1597]: time="2025-10-29T23:30:46.428821584Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 23:30:46.428964 containerd[1597]: time="2025-10-29T23:30:46.428938869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 23:30:46.429019 containerd[1597]: time="2025-10-29T23:30:46.428968200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 23:30:46.429019 containerd[1597]: time="2025-10-29T23:30:46.428981286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 23:30:46.429019 containerd[1597]: time="2025-10-29T23:30:46.429011549Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 23:30:46.429071 containerd[1597]: time="2025-10-29T23:30:46.429025242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 23:30:46.429071 containerd[1597]: time="2025-10-29T23:30:46.429036910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 23:30:46.429071 containerd[1597]: time="2025-10-29T23:30:46.429048618Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 23:30:46.429118 containerd[1597]: time="2025-10-29T23:30:46.429073169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 23:30:46.429118 containerd[1597]: 
time="2025-10-29T23:30:46.429085971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 23:30:46.429118 containerd[1597]: time="2025-10-29T23:30:46.429097760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 23:30:46.429169 containerd[1597]: time="2025-10-29T23:30:46.429131426Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:30:46.429169 containerd[1597]: time="2025-10-29T23:30:46.429147267Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:30:46.429169 containerd[1597]: time="2025-10-29T23:30:46.429156098Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:30:46.429169 containerd[1597]: time="2025-10-29T23:30:46.429166389Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:30:46.429236 containerd[1597]: time="2025-10-29T23:30:46.429175018Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 23:30:46.429236 containerd[1597]: time="2025-10-29T23:30:46.429184903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 23:30:46.429236 containerd[1597]: time="2025-10-29T23:30:46.429198434Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 23:30:46.429284 containerd[1597]: time="2025-10-29T23:30:46.429278609Z" level=info msg="runtime interface created" Oct 29 23:30:46.429305 containerd[1597]: time="2025-10-29T23:30:46.429284200Z" level=info msg="created NRI interface" Oct 29 23:30:46.429305 containerd[1597]: time="2025-10-29T23:30:46.429294571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 23:30:46.429338 containerd[1597]: time="2025-10-29T23:30:46.429307292Z" level=info msg="Connect containerd service" Oct 29 23:30:46.429370 containerd[1597]: time="2025-10-29T23:30:46.429339095Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 23:30:46.432448 containerd[1597]: time="2025-10-29T23:30:46.430122005Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 23:30:46.504642 containerd[1597]: time="2025-10-29T23:30:46.504575634Z" level=info msg="Start subscribing containerd event" Oct 29 23:30:46.504753 containerd[1597]: time="2025-10-29T23:30:46.504705882Z" level=info msg="Start recovering state" Oct 29 23:30:46.504913 containerd[1597]: time="2025-10-29T23:30:46.504893902Z" level=info msg="Start event monitor" Oct 29 23:30:46.504991 containerd[1597]: time="2025-10-29T23:30:46.504956859Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 23:30:46.505056 containerd[1597]: time="2025-10-29T23:30:46.505037966Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 29 23:30:46.505177 containerd[1597]: time="2025-10-29T23:30:46.504971120Z" level=info msg="Start cni network conf syncer for default" Oct 29 23:30:46.505290 containerd[1597]: time="2025-10-29T23:30:46.505269659Z" level=info msg="Start streaming server" Oct 29 23:30:46.505337 containerd[1597]: time="2025-10-29T23:30:46.505293723Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 23:30:46.505337 containerd[1597]: time="2025-10-29T23:30:46.505312440Z" level=info msg="runtime interface starting up..." Oct 29 23:30:46.505400 containerd[1597]: time="2025-10-29T23:30:46.505382933Z" level=info msg="starting plugins..." Oct 29 23:30:46.505429 containerd[1597]: time="2025-10-29T23:30:46.505413844Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 23:30:46.505972 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 23:30:46.507078 containerd[1597]: time="2025-10-29T23:30:46.507045579Z" level=info msg="containerd successfully booted in 0.103621s" Oct 29 23:30:46.575653 tar[1584]: linux-arm64/README.md Oct 29 23:30:46.603601 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 23:30:46.966578 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 23:30:46.991264 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 23:30:46.995856 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 23:30:47.025275 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 23:30:47.025539 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 23:30:47.028731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 23:30:47.051766 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 23:30:47.054970 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 23:30:47.059209 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 29 23:30:47.060763 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 23:30:47.829443 systemd-networkd[1489]: eth0: Gained IPv6LL Oct 29 23:30:47.831979 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 23:30:47.833909 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 23:30:47.836575 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 23:30:47.839451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:30:47.853006 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 23:30:47.869493 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 23:30:47.870432 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 23:30:47.872231 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 23:30:47.878011 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 23:30:48.408709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:30:48.410548 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 23:30:48.413170 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:30:48.416778 systemd[1]: Startup finished in 1.175s (kernel) + 5.190s (initrd) + 4.172s (userspace) = 10.538s. 
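The "failed to load cni during init" error logged a few entries above is expected on a first boot: the CRI plugin's conf syncer watches /etc/cni/net.d (the confDir shown in its config dump), and no CNI provider has written a network config there yet, so the warning normally clears once a CNI add-on is installed. Purely as an illustration, the Go sketch below shows roughly what dropping a minimal bridge conflist into that directory would look like; the file name, network name, bridge and subnet are invented placeholders, not values taken from this system.

package main

import (
	"log"
	"os"
)

// Hypothetical minimal CNI conflist; every value here (name, bridge, subnet)
// is an illustrative placeholder, not something present in the log above.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
`

func main() {
	// /etc/cni/net.d is the confDir the CRI plugin reports in its config dump.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/10-examplenet.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote placeholder CNI config; the cni conf syncer would pick it up")
}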
Oct 29 23:30:48.745568 kubelet[1703]: E1029 23:30:48.745446 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:30:48.747964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:30:48.748138 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:30:48.754269 systemd[1]: kubelet.service: Consumed 692ms CPU time, 247.4M memory peak. Oct 29 23:30:50.453775 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 23:30:50.454972 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:47964.service - OpenSSH per-connection server daemon (10.0.0.1:47964). Oct 29 23:30:50.533749 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 47964 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:50.535704 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:50.542269 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 23:30:50.543288 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 23:30:50.548475 systemd-logind[1573]: New session 1 of user core. Oct 29 23:30:50.564079 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 23:30:50.566592 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 23:30:50.582220 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 23:30:50.584793 systemd-logind[1573]: New session c1 of user core. Oct 29 23:30:50.700813 systemd[1721]: Queued start job for default target default.target. Oct 29 23:30:50.716210 systemd[1721]: Created slice app.slice - User Application Slice. Oct 29 23:30:50.716242 systemd[1721]: Reached target paths.target - Paths. Oct 29 23:30:50.716281 systemd[1721]: Reached target timers.target - Timers. Oct 29 23:30:50.717858 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 23:30:50.728328 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 23:30:50.728458 systemd[1721]: Reached target sockets.target - Sockets. Oct 29 23:30:50.728514 systemd[1721]: Reached target basic.target - Basic System. Oct 29 23:30:50.728542 systemd[1721]: Reached target default.target - Main User Target. Oct 29 23:30:50.728569 systemd[1721]: Startup finished in 137ms. Oct 29 23:30:50.729189 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 23:30:50.730692 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 23:30:50.797303 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972). Oct 29 23:30:50.856845 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:50.858211 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:50.862298 systemd-logind[1573]: New session 2 of user core. Oct 29 23:30:50.877158 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 29 23:30:50.930542 sshd[1735]: Connection closed by 10.0.0.1 port 47972 Oct 29 23:30:50.930878 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:50.945064 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:47972.service: Deactivated successfully. Oct 29 23:30:50.946846 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 23:30:50.949627 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Oct 29 23:30:50.951946 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:47986.service - OpenSSH per-connection server daemon (10.0.0.1:47986). Oct 29 23:30:50.952600 systemd-logind[1573]: Removed session 2. Oct 29 23:30:51.006527 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 47986 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:51.008185 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:51.012737 systemd-logind[1573]: New session 3 of user core. Oct 29 23:30:51.021180 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 23:30:51.070892 sshd[1744]: Connection closed by 10.0.0.1 port 47986 Oct 29 23:30:51.071375 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:51.082215 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:47986.service: Deactivated successfully. Oct 29 23:30:51.084602 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 23:30:51.085493 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Oct 29 23:30:51.088194 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:47988.service - OpenSSH per-connection server daemon (10.0.0.1:47988). Oct 29 23:30:51.088690 systemd-logind[1573]: Removed session 3. Oct 29 23:30:51.140840 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 47988 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:51.142210 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:51.146223 systemd-logind[1573]: New session 4 of user core. Oct 29 23:30:51.159188 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 23:30:51.212028 sshd[1754]: Connection closed by 10.0.0.1 port 47988 Oct 29 23:30:51.212149 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:51.233204 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:47988.service: Deactivated successfully. Oct 29 23:30:51.234850 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 23:30:51.235640 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Oct 29 23:30:51.239083 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:47990.service - OpenSSH per-connection server daemon (10.0.0.1:47990). Oct 29 23:30:51.239698 systemd-logind[1573]: Removed session 4. Oct 29 23:30:51.296945 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 47990 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:51.298331 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:51.303424 systemd-logind[1573]: New session 5 of user core. Oct 29 23:30:51.317188 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 29 23:30:51.377003 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 23:30:51.377278 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:51.392969 sudo[1764]: pam_unix(sudo:session): session closed for user root Oct 29 23:30:51.394933 sshd[1763]: Connection closed by 10.0.0.1 port 47990 Oct 29 23:30:51.395704 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:51.409190 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:47990.service: Deactivated successfully. Oct 29 23:30:51.411581 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 23:30:51.412498 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Oct 29 23:30:51.415174 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:48004.service - OpenSSH per-connection server daemon (10.0.0.1:48004). Oct 29 23:30:51.415626 systemd-logind[1573]: Removed session 5. Oct 29 23:30:51.474502 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:51.475793 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:51.479830 systemd-logind[1573]: New session 6 of user core. Oct 29 23:30:51.492126 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 23:30:51.543620 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 23:30:51.543887 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:51.548565 sudo[1775]: pam_unix(sudo:session): session closed for user root Oct 29 23:30:51.554442 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 23:30:51.554906 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:51.563213 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:30:51.606107 augenrules[1797]: No rules Oct 29 23:30:51.606725 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:30:51.606915 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:30:51.608107 sudo[1774]: pam_unix(sudo:session): session closed for user root Oct 29 23:30:51.609462 sshd[1773]: Connection closed by 10.0.0.1 port 48004 Oct 29 23:30:51.609765 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:51.624358 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:48004.service: Deactivated successfully. Oct 29 23:30:51.626236 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 23:30:51.629142 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Oct 29 23:30:51.630911 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:48020.service - OpenSSH per-connection server daemon (10.0.0.1:48020). Oct 29 23:30:51.631563 systemd-logind[1573]: Removed session 6. Oct 29 23:30:51.684176 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 48020 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:30:51.685401 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:51.689375 systemd-logind[1573]: New session 7 of user core. Oct 29 23:30:51.703156 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 29 23:30:51.754891 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 23:30:51.755180 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:52.032844 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 23:30:52.045267 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 23:30:52.244951 dockerd[1833]: time="2025-10-29T23:30:52.244886692Z" level=info msg="Starting up" Oct 29 23:30:52.245679 dockerd[1833]: time="2025-10-29T23:30:52.245658950Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 23:30:52.256216 dockerd[1833]: time="2025-10-29T23:30:52.256182998Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 23:30:52.422132 dockerd[1833]: time="2025-10-29T23:30:52.422030608Z" level=info msg="Loading containers: start." Oct 29 23:30:52.432014 kernel: Initializing XFRM netlink socket Oct 29 23:30:52.633130 systemd-networkd[1489]: docker0: Link UP Oct 29 23:30:52.637027 dockerd[1833]: time="2025-10-29T23:30:52.636968301Z" level=info msg="Loading containers: done." Oct 29 23:30:52.648303 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2723620004-merged.mount: Deactivated successfully. Oct 29 23:30:52.651487 dockerd[1833]: time="2025-10-29T23:30:52.651441220Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 23:30:52.651681 dockerd[1833]: time="2025-10-29T23:30:52.651650618Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 23:30:52.651786 dockerd[1833]: time="2025-10-29T23:30:52.651768935Z" level=info msg="Initializing buildkit" Oct 29 23:30:52.674789 dockerd[1833]: time="2025-10-29T23:30:52.674684515Z" level=info msg="Completed buildkit initialization" Oct 29 23:30:52.681362 dockerd[1833]: time="2025-10-29T23:30:52.681314082Z" level=info msg="Daemon has completed initialization" Oct 29 23:30:52.681610 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 23:30:52.681963 dockerd[1833]: time="2025-10-29T23:30:52.681383077Z" level=info msg="API listen on /run/docker.sock" Oct 29 23:30:53.184855 containerd[1597]: time="2025-10-29T23:30:53.184816188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 29 23:30:53.744068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059069546.mount: Deactivated successfully. 
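The PullImage request above is issued by the CRI plugin inside the containerd daemon started earlier in this log. As a minimal sketch, an equivalent pull through containerd's Go client might look like the following, assuming the default socket path and the "k8s.io" namespace that the log registers with NRI; note that containerd 2.x relocates this client under the github.com/containerd/containerd/v2/client module path, while the classic import path is used here for brevity.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the daemon over the socket shown in the log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack into the default (overlayfs) snapshotter.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.34.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}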
Oct 29 23:30:54.686430 containerd[1597]: time="2025-10-29T23:30:54.686364534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:54.686851 containerd[1597]: time="2025-10-29T23:30:54.686823868Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 29 23:30:54.687950 containerd[1597]: time="2025-10-29T23:30:54.687919634Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:54.691188 containerd[1597]: time="2025-10-29T23:30:54.691137588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:54.692068 containerd[1597]: time="2025-10-29T23:30:54.692038218Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.507182558s" Oct 29 23:30:54.692268 containerd[1597]: time="2025-10-29T23:30:54.692144042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 29 23:30:54.692653 containerd[1597]: time="2025-10-29T23:30:54.692633389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 29 23:30:55.772218 containerd[1597]: time="2025-10-29T23:30:55.772175844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:55.773755 containerd[1597]: time="2025-10-29T23:30:55.773731538Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 29 23:30:55.774686 containerd[1597]: time="2025-10-29T23:30:55.774642114Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:55.777070 containerd[1597]: time="2025-10-29T23:30:55.777044539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:55.778818 containerd[1597]: time="2025-10-29T23:30:55.778475434Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.085767201s" Oct 29 23:30:55.778818 containerd[1597]: time="2025-10-29T23:30:55.778506794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 29 23:30:55.779169 
containerd[1597]: time="2025-10-29T23:30:55.779145486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 29 23:30:56.676775 containerd[1597]: time="2025-10-29T23:30:56.676724044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:56.677212 containerd[1597]: time="2025-10-29T23:30:56.677186559Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 29 23:30:56.678120 containerd[1597]: time="2025-10-29T23:30:56.678091521Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:56.681010 containerd[1597]: time="2025-10-29T23:30:56.680839518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:56.681854 containerd[1597]: time="2025-10-29T23:30:56.681808536Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 902.63097ms" Oct 29 23:30:56.681854 containerd[1597]: time="2025-10-29T23:30:56.681843854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 29 23:30:56.682373 containerd[1597]: time="2025-10-29T23:30:56.682296135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 29 23:30:57.695354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3736752008.mount: Deactivated successfully. 
Oct 29 23:30:57.871637 containerd[1597]: time="2025-10-29T23:30:57.871584942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:57.872541 containerd[1597]: time="2025-10-29T23:30:57.872500998Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 29 23:30:57.873359 containerd[1597]: time="2025-10-29T23:30:57.873326106Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:57.874928 containerd[1597]: time="2025-10-29T23:30:57.874864994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:57.875612 containerd[1597]: time="2025-10-29T23:30:57.875348136Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.193017889s" Oct 29 23:30:57.875612 containerd[1597]: time="2025-10-29T23:30:57.875378906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 29 23:30:57.875887 containerd[1597]: time="2025-10-29T23:30:57.875860523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 29 23:30:58.370905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033145241.mount: Deactivated successfully. Oct 29 23:30:58.906233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 23:30:58.907730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:30:59.052037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:30:59.055797 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:30:59.096617 kubelet[2183]: E1029 23:30:59.096572 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:30:59.101722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:30:59.101843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:30:59.102415 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.6M memory peak. 
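The kubelet exits here for the same reason as on the first attempt: /var/lib/kubelet/config.yaml does not exist yet (it is typically written later, for example by kubeadm during init/join), so systemd keeps scheduling restarts until the file appears. A trivial Go sketch of the failing precondition, using only the path quoted in the error message:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet error above; kubelet refuses to start
	// while this file cannot be read.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		fmt.Printf("kubelet would fail to start: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("config present; startup would proceed past this check")
}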
Oct 29 23:30:59.504852 containerd[1597]: time="2025-10-29T23:30:59.504796586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:59.505321 containerd[1597]: time="2025-10-29T23:30:59.505286250Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 29 23:30:59.506336 containerd[1597]: time="2025-10-29T23:30:59.506313847Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:59.508905 containerd[1597]: time="2025-10-29T23:30:59.508860228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:59.510128 containerd[1597]: time="2025-10-29T23:30:59.510091524Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.63413372s" Oct 29 23:30:59.510128 containerd[1597]: time="2025-10-29T23:30:59.510126883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 29 23:30:59.510611 containerd[1597]: time="2025-10-29T23:30:59.510585357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 29 23:31:00.002038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387544544.mount: Deactivated successfully. 
Oct 29 23:31:00.010584 containerd[1597]: time="2025-10-29T23:31:00.010519667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:00.011213 containerd[1597]: time="2025-10-29T23:31:00.011184258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 29 23:31:00.012223 containerd[1597]: time="2025-10-29T23:31:00.012163270Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:00.014533 containerd[1597]: time="2025-10-29T23:31:00.014500682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:00.015632 containerd[1597]: time="2025-10-29T23:31:00.015375568Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 504.759664ms" Oct 29 23:31:00.015632 containerd[1597]: time="2025-10-29T23:31:00.015408673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 29 23:31:00.015948 containerd[1597]: time="2025-10-29T23:31:00.015808302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 29 23:31:03.264629 containerd[1597]: time="2025-10-29T23:31:03.264568807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:03.265171 containerd[1597]: time="2025-10-29T23:31:03.265095264Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 29 23:31:03.266144 containerd[1597]: time="2025-10-29T23:31:03.266108684Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:03.269300 containerd[1597]: time="2025-10-29T23:31:03.269274271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:03.270702 containerd[1597]: time="2025-10-29T23:31:03.270182953Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.25434091s" Oct 29 23:31:03.270702 containerd[1597]: time="2025-10-29T23:31:03.270217840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 29 23:31:09.155865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 23:31:09.157374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 29 23:31:09.288250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:09.292368 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:31:09.325950 kubelet[2272]: E1029 23:31:09.325894 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:31:09.328089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:31:09.328219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:31:09.328533 systemd[1]: kubelet.service: Consumed 134ms CPU time, 106.8M memory peak. Oct 29 23:31:09.469466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:09.469607 systemd[1]: kubelet.service: Consumed 134ms CPU time, 106.8M memory peak. Oct 29 23:31:09.472224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:09.495779 systemd[1]: Reload requested from client PID 2287 ('systemctl') (unit session-7.scope)... Oct 29 23:31:09.495792 systemd[1]: Reloading... Oct 29 23:31:09.573006 zram_generator::config[2332]: No configuration found. Oct 29 23:31:09.833697 systemd[1]: Reloading finished in 337 ms. Oct 29 23:31:09.895593 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 23:31:09.895693 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 23:31:09.895948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:09.896020 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95M memory peak. Oct 29 23:31:09.897608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:10.022719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:10.038320 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 23:31:10.070872 kubelet[2376]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 23:31:10.070872 kubelet[2376]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:31:10.071471 kubelet[2376]: I1029 23:31:10.071427 2376 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 23:31:11.655446 kubelet[2376]: I1029 23:31:11.655397 2376 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 23:31:11.655446 kubelet[2376]: I1029 23:31:11.655430 2376 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 23:31:11.656552 kubelet[2376]: I1029 23:31:11.656518 2376 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 23:31:11.656552 kubelet[2376]: I1029 23:31:11.656541 2376 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 23:31:11.656827 kubelet[2376]: I1029 23:31:11.656798 2376 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 23:31:11.741145 kubelet[2376]: E1029 23:31:11.741105 2376 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 23:31:11.741655 kubelet[2376]: I1029 23:31:11.741630 2376 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 23:31:11.747965 kubelet[2376]: I1029 23:31:11.747948 2376 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 23:31:11.751398 kubelet[2376]: I1029 23:31:11.751373 2376 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 29 23:31:11.751670 kubelet[2376]: I1029 23:31:11.751632 2376 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 23:31:11.751898 kubelet[2376]: I1029 23:31:11.751672 2376 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 23:31:11.752011 kubelet[2376]: I1029 23:31:11.751902 2376 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 23:31:11.752011 kubelet[2376]: I1029 23:31:11.751912 2376 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 23:31:11.752052 kubelet[2376]: I1029 23:31:11.752040 2376 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 23:31:11.757828 kubelet[2376]: I1029 23:31:11.757802 2376 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:11.759016 kubelet[2376]: I1029 23:31:11.758998 2376 kubelet.go:475] "Attempting to sync node with API server" Oct 29 
23:31:11.759048 kubelet[2376]: I1029 23:31:11.759026 2376 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 23:31:11.759501 kubelet[2376]: I1029 23:31:11.759485 2376 kubelet.go:387] "Adding apiserver pod source" Oct 29 23:31:11.759527 kubelet[2376]: I1029 23:31:11.759513 2376 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 23:31:11.759606 kubelet[2376]: E1029 23:31:11.759568 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 23:31:11.760621 kubelet[2376]: E1029 23:31:11.760598 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 23:31:11.761249 kubelet[2376]: I1029 23:31:11.761231 2376 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 23:31:11.761929 kubelet[2376]: I1029 23:31:11.761908 2376 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 23:31:11.761968 kubelet[2376]: I1029 23:31:11.761945 2376 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 23:31:11.762006 kubelet[2376]: W1029 23:31:11.761996 2376 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
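The "Failed to watch" errors above come from client-go reflectors inside the kubelet: it tries to list Node and Service objects from https://10.0.0.48:6443, but the kube-apiserver static pod is not running yet, so every request ends in "connection refused". A hedged client-go sketch of the same kind of list call (endpoint and CA path are taken from the log; the certificate bootstrap the kubelet performs is omitted here):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Endpoint and CA bundle as seen in the kubelet log lines above.
	cfg := &rest.Config{
		Host: "https://10.0.0.48:6443",
		TLSClientConfig: rest.TLSClientConfig{
			CAFile: "/etc/kubernetes/pki/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same kind of request the failing reflector issues: list Nodes
	// filtered to this host's node name.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
	})
	if err != nil {
		// Until the kube-apiserver static pod is up, this fails with
		// "connect: connection refused", exactly as in the log.
		log.Fatal(err)
	}
	fmt.Println("nodes found:", len(nodes.Items))
}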
Oct 29 23:31:11.767217 kubelet[2376]: I1029 23:31:11.767198 2376 server.go:1262] "Started kubelet" Oct 29 23:31:11.767705 kubelet[2376]: I1029 23:31:11.767663 2376 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 23:31:11.767742 kubelet[2376]: I1029 23:31:11.767722 2376 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 23:31:11.768480 kubelet[2376]: I1029 23:31:11.767971 2376 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 23:31:11.768541 kubelet[2376]: I1029 23:31:11.768511 2376 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 23:31:11.773281 kubelet[2376]: I1029 23:31:11.771567 2376 server.go:310] "Adding debug handlers to kubelet server" Oct 29 23:31:11.773791 kubelet[2376]: E1029 23:31:11.771807 2376 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731a2fc8d4aed5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 23:31:11.767162581 +0000 UTC m=+1.725900491,LastTimestamp:2025-10-29 23:31:11.767162581 +0000 UTC m=+1.725900491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 23:31:11.774274 kubelet[2376]: I1029 23:31:11.774224 2376 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 23:31:11.774274 kubelet[2376]: E1029 23:31:11.774237 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:11.775131 kubelet[2376]: I1029 23:31:11.774368 2376 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 23:31:11.775131 kubelet[2376]: I1029 23:31:11.774539 2376 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 23:31:11.775131 kubelet[2376]: I1029 23:31:11.774622 2376 reconciler.go:29] "Reconciler: start to sync state" Oct 29 23:31:11.775251 kubelet[2376]: E1029 23:31:11.775225 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 23:31:11.775703 kubelet[2376]: I1029 23:31:11.775665 2376 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 23:31:11.776015 kubelet[2376]: E1029 23:31:11.775988 2376 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 23:31:11.776173 kubelet[2376]: I1029 23:31:11.767972 2376 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 23:31:11.776317 kubelet[2376]: E1029 23:31:11.776292 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Oct 29 23:31:11.776949 kubelet[2376]: I1029 23:31:11.776928 2376 factory.go:223] Registration of the containerd container factory successfully Oct 29 23:31:11.776949 kubelet[2376]: I1029 23:31:11.776946 2376 factory.go:223] Registration of the systemd container factory successfully Oct 29 23:31:11.787998 kubelet[2376]: I1029 23:31:11.787816 2376 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 29 23:31:11.788872 kubelet[2376]: I1029 23:31:11.788836 2376 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 29 23:31:11.788991 kubelet[2376]: I1029 23:31:11.788964 2376 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 23:31:11.789391 kubelet[2376]: I1029 23:31:11.789039 2376 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 23:31:11.789391 kubelet[2376]: E1029 23:31:11.789085 2376 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 23:31:11.793216 kubelet[2376]: E1029 23:31:11.793174 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 23:31:11.794117 kubelet[2376]: I1029 23:31:11.794082 2376 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 23:31:11.794206 kubelet[2376]: I1029 23:31:11.794111 2376 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 23:31:11.794237 kubelet[2376]: I1029 23:31:11.794207 2376 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:11.797519 kubelet[2376]: E1029 23:31:11.797058 2376 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731a2fc8d4aed5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 23:31:11.767162581 +0000 UTC m=+1.725900491,LastTimestamp:2025-10-29 23:31:11.767162581 +0000 UTC m=+1.725900491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 23:31:11.797519 kubelet[2376]: I1029 23:31:11.797218 2376 policy_none.go:49] "None policy: Start" Oct 29 23:31:11.797519 kubelet[2376]: I1029 23:31:11.797233 2376 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 23:31:11.797519 kubelet[2376]: I1029 
23:31:11.797245 2376 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 23:31:11.800223 kubelet[2376]: I1029 23:31:11.800196 2376 policy_none.go:47] "Start" Oct 29 23:31:11.804744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 23:31:11.816934 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 23:31:11.820105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 23:31:11.831131 kubelet[2376]: E1029 23:31:11.831092 2376 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 23:31:11.831389 kubelet[2376]: I1029 23:31:11.831310 2376 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 23:31:11.831954 kubelet[2376]: I1029 23:31:11.831323 2376 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 23:31:11.832108 kubelet[2376]: I1029 23:31:11.832085 2376 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 23:31:11.833251 kubelet[2376]: E1029 23:31:11.833221 2376 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 23:31:11.833303 kubelet[2376]: E1029 23:31:11.833258 2376 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 23:31:11.900214 systemd[1]: Created slice kubepods-burstable-pod9eb728101cee2521d3c665a1082214df.slice - libcontainer container kubepods-burstable-pod9eb728101cee2521d3c665a1082214df.slice. Oct 29 23:31:11.914724 kubelet[2376]: E1029 23:31:11.914605 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:11.917715 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 29 23:31:11.919820 kubelet[2376]: E1029 23:31:11.919792 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:11.921517 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Oct 29 23:31:11.924502 kubelet[2376]: E1029 23:31:11.924446 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:11.933554 kubelet[2376]: I1029 23:31:11.933532 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:31:11.934055 kubelet[2376]: E1029 23:31:11.934025 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Oct 29 23:31:11.975518 kubelet[2376]: I1029 23:31:11.975447 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:11.975518 kubelet[2376]: I1029 23:31:11.975494 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:11.975518 kubelet[2376]: I1029 23:31:11.975514 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:11.975518 kubelet[2376]: I1029 23:31:11.975529 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:11.975716 kubelet[2376]: I1029 23:31:11.975547 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:11.975716 kubelet[2376]: I1029 23:31:11.975561 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9eb728101cee2521d3c665a1082214df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9eb728101cee2521d3c665a1082214df\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:11.975716 kubelet[2376]: I1029 23:31:11.975574 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9eb728101cee2521d3c665a1082214df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9eb728101cee2521d3c665a1082214df\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:11.975716 kubelet[2376]: I1029 23:31:11.975586 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9eb728101cee2521d3c665a1082214df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9eb728101cee2521d3c665a1082214df\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:11.975716 kubelet[2376]: I1029 23:31:11.975599 2376 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:11.977229 kubelet[2376]: E1029 23:31:11.977174 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Oct 29 23:31:12.137188 kubelet[2376]: I1029 23:31:12.137144 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:31:12.137843 kubelet[2376]: E1029 23:31:12.137713 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Oct 29 23:31:12.217361 kubelet[2376]: E1029 23:31:12.216861 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:12.218386 containerd[1597]: time="2025-10-29T23:31:12.217762583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9eb728101cee2521d3c665a1082214df,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:12.222876 kubelet[2376]: E1029 23:31:12.222814 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:12.223621 containerd[1597]: time="2025-10-29T23:31:12.223363293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:12.227013 kubelet[2376]: E1029 23:31:12.226964 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:12.227471 containerd[1597]: time="2025-10-29T23:31:12.227438595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:12.378349 kubelet[2376]: E1029 23:31:12.378311 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Oct 29 23:31:12.539540 kubelet[2376]: I1029 23:31:12.539197 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:31:12.539540 kubelet[2376]: E1029 23:31:12.539522 2376 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Oct 29 23:31:12.727044 kubelet[2376]: E1029 
23:31:12.727008 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 23:31:12.869284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758456372.mount: Deactivated successfully. Oct 29 23:31:12.872190 kubelet[2376]: E1029 23:31:12.872151 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 23:31:12.878153 containerd[1597]: time="2025-10-29T23:31:12.878099358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:12.881540 containerd[1597]: time="2025-10-29T23:31:12.881502353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 29 23:31:12.885656 containerd[1597]: time="2025-10-29T23:31:12.885606826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:12.887435 containerd[1597]: time="2025-10-29T23:31:12.887377971Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:12.888134 containerd[1597]: time="2025-10-29T23:31:12.888093256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 23:31:12.889774 containerd[1597]: time="2025-10-29T23:31:12.889709259Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:12.890441 containerd[1597]: time="2025-10-29T23:31:12.890412419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 29 23:31:12.892516 containerd[1597]: time="2025-10-29T23:31:12.892459754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:12.893485 containerd[1597]: time="2025-10-29T23:31:12.892903571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 660.317647ms" Oct 29 23:31:12.897475 containerd[1597]: time="2025-10-29T23:31:12.897436135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 672.266083ms" Oct 29 23:31:12.898558 containerd[1597]: time="2025-10-29T23:31:12.898353060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 674.752953ms" Oct 29 23:31:12.917764 containerd[1597]: time="2025-10-29T23:31:12.917719169Z" level=info msg="connecting to shim 1ccb474f5fcbe852481435bdf06334b3a448c6a2ed6bfd7adc8f70a9a86ef2ca" address="unix:///run/containerd/s/cd6b2a96ca91b317bc0779ed646201e7747d16c287e13dfb9fd6c5acdf5b2217" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:12.929494 containerd[1597]: time="2025-10-29T23:31:12.929441355Z" level=info msg="connecting to shim 0ea61dfead073e3373625e4e1f7272bfd7db65b0a89134a8a94a383f4271f3b2" address="unix:///run/containerd/s/96983e97026c8a8a1935d39124ba885894917ad75009afe66554a934bdb1642e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:12.929941 containerd[1597]: time="2025-10-29T23:31:12.929907981Z" level=info msg="connecting to shim 0b426650386fff1ae0327d3d3bee457c1b39b90fa68b48c3000fe0c5542aa565" address="unix:///run/containerd/s/4e6f137cb449b9539c111e9cb2e403dfab420f7f6a22723574642952c767ff37" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:12.938210 kubelet[2376]: E1029 23:31:12.936570 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 23:31:12.943195 systemd[1]: Started cri-containerd-1ccb474f5fcbe852481435bdf06334b3a448c6a2ed6bfd7adc8f70a9a86ef2ca.scope - libcontainer container 1ccb474f5fcbe852481435bdf06334b3a448c6a2ed6bfd7adc8f70a9a86ef2ca. Oct 29 23:31:12.951676 systemd[1]: Started cri-containerd-0ea61dfead073e3373625e4e1f7272bfd7db65b0a89134a8a94a383f4271f3b2.scope - libcontainer container 0ea61dfead073e3373625e4e1f7272bfd7db65b0a89134a8a94a383f4271f3b2. Oct 29 23:31:12.966184 systemd[1]: Started cri-containerd-0b426650386fff1ae0327d3d3bee457c1b39b90fa68b48c3000fe0c5542aa565.scope - libcontainer container 0b426650386fff1ae0327d3d3bee457c1b39b90fa68b48c3000fe0c5542aa565. 
Oct 29 23:31:12.993676 containerd[1597]: time="2025-10-29T23:31:12.993574244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ccb474f5fcbe852481435bdf06334b3a448c6a2ed6bfd7adc8f70a9a86ef2ca\"" Oct 29 23:31:12.994952 kubelet[2376]: E1029 23:31:12.994739 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:13.001458 containerd[1597]: time="2025-10-29T23:31:13.001421225Z" level=info msg="CreateContainer within sandbox \"1ccb474f5fcbe852481435bdf06334b3a448c6a2ed6bfd7adc8f70a9a86ef2ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 23:31:13.002989 containerd[1597]: time="2025-10-29T23:31:13.002946677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9eb728101cee2521d3c665a1082214df,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ea61dfead073e3373625e4e1f7272bfd7db65b0a89134a8a94a383f4271f3b2\"" Oct 29 23:31:13.003831 kubelet[2376]: E1029 23:31:13.003674 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:13.007359 containerd[1597]: time="2025-10-29T23:31:13.007331004Z" level=info msg="CreateContainer within sandbox \"0ea61dfead073e3373625e4e1f7272bfd7db65b0a89134a8a94a383f4271f3b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 23:31:13.013911 kubelet[2376]: E1029 23:31:13.013873 2376 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 23:31:13.015354 containerd[1597]: time="2025-10-29T23:31:13.015302941Z" level=info msg="Container ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:13.016772 containerd[1597]: time="2025-10-29T23:31:13.016735120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b426650386fff1ae0327d3d3bee457c1b39b90fa68b48c3000fe0c5542aa565\"" Oct 29 23:31:13.017419 kubelet[2376]: E1029 23:31:13.017398 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:13.031490 containerd[1597]: time="2025-10-29T23:31:13.031450045Z" level=info msg="CreateContainer within sandbox \"0b426650386fff1ae0327d3d3bee457c1b39b90fa68b48c3000fe0c5542aa565\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 23:31:13.032936 containerd[1597]: time="2025-10-29T23:31:13.032837608Z" level=info msg="Container 43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:13.037910 containerd[1597]: time="2025-10-29T23:31:13.037876043Z" level=info msg="CreateContainer within sandbox \"1ccb474f5fcbe852481435bdf06334b3a448c6a2ed6bfd7adc8f70a9a86ef2ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} 
returns container id \"ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7\"" Oct 29 23:31:13.039020 containerd[1597]: time="2025-10-29T23:31:13.038488457Z" level=info msg="StartContainer for \"ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7\"" Oct 29 23:31:13.039649 containerd[1597]: time="2025-10-29T23:31:13.039617010Z" level=info msg="connecting to shim ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7" address="unix:///run/containerd/s/cd6b2a96ca91b317bc0779ed646201e7747d16c287e13dfb9fd6c5acdf5b2217" protocol=ttrpc version=3 Oct 29 23:31:13.043010 containerd[1597]: time="2025-10-29T23:31:13.042955733Z" level=info msg="CreateContainer within sandbox \"0ea61dfead073e3373625e4e1f7272bfd7db65b0a89134a8a94a383f4271f3b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0\"" Oct 29 23:31:13.043520 containerd[1597]: time="2025-10-29T23:31:13.043492040Z" level=info msg="StartContainer for \"43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0\"" Oct 29 23:31:13.045028 containerd[1597]: time="2025-10-29T23:31:13.044996644Z" level=info msg="Container bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:13.046129 containerd[1597]: time="2025-10-29T23:31:13.046087023Z" level=info msg="connecting to shim 43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0" address="unix:///run/containerd/s/96983e97026c8a8a1935d39124ba885894917ad75009afe66554a934bdb1642e" protocol=ttrpc version=3 Oct 29 23:31:13.052354 containerd[1597]: time="2025-10-29T23:31:13.052283102Z" level=info msg="CreateContainer within sandbox \"0b426650386fff1ae0327d3d3bee457c1b39b90fa68b48c3000fe0c5542aa565\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854\"" Oct 29 23:31:13.053615 containerd[1597]: time="2025-10-29T23:31:13.052940131Z" level=info msg="StartContainer for \"bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854\"" Oct 29 23:31:13.055675 containerd[1597]: time="2025-10-29T23:31:13.055608500Z" level=info msg="connecting to shim bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854" address="unix:///run/containerd/s/4e6f137cb449b9539c111e9cb2e403dfab420f7f6a22723574642952c767ff37" protocol=ttrpc version=3 Oct 29 23:31:13.063185 systemd[1]: Started cri-containerd-ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7.scope - libcontainer container ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7. Oct 29 23:31:13.066596 systemd[1]: Started cri-containerd-43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0.scope - libcontainer container 43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0. Oct 29 23:31:13.082172 systemd[1]: Started cri-containerd-bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854.scope - libcontainer container bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854. 
Oct 29 23:31:13.114509 containerd[1597]: time="2025-10-29T23:31:13.114370528Z" level=info msg="StartContainer for \"43221222320656f8fee19f8d94906365648c319ebf99a7e3eef2e4d583db50f0\" returns successfully" Oct 29 23:31:13.127926 containerd[1597]: time="2025-10-29T23:31:13.127814011Z" level=info msg="StartContainer for \"ef2bee1aada9e7d0188afa68f5e8a38ec7247621fc922b69ace4bf973bcf2bd7\" returns successfully" Oct 29 23:31:13.146252 containerd[1597]: time="2025-10-29T23:31:13.146206617Z" level=info msg="StartContainer for \"bd3da820c0ecabafeb99fdbb1887584284a808a5e2260e9418dd401fed36d854\" returns successfully" Oct 29 23:31:13.180000 kubelet[2376]: E1029 23:31:13.179594 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Oct 29 23:31:13.341205 kubelet[2376]: I1029 23:31:13.341172 2376 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:31:13.804857 kubelet[2376]: E1029 23:31:13.804346 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:13.805260 kubelet[2376]: E1029 23:31:13.805129 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:13.805449 kubelet[2376]: E1029 23:31:13.805429 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:13.805564 kubelet[2376]: E1029 23:31:13.805546 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:13.807292 kubelet[2376]: E1029 23:31:13.807273 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:13.807558 kubelet[2376]: E1029 23:31:13.807506 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:14.563359 kubelet[2376]: I1029 23:31:14.563308 2376 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 23:31:14.563359 kubelet[2376]: E1029 23:31:14.563358 2376 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 23:31:14.586644 kubelet[2376]: E1029 23:31:14.586591 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:14.686788 kubelet[2376]: E1029 23:31:14.686728 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:14.787003 kubelet[2376]: E1029 23:31:14.786942 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:14.807517 kubelet[2376]: E1029 23:31:14.807485 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:14.807797 kubelet[2376]: E1029 
23:31:14.807599 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:14.808635 kubelet[2376]: E1029 23:31:14.808616 2376 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:31:14.808776 kubelet[2376]: E1029 23:31:14.808759 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:14.887426 kubelet[2376]: E1029 23:31:14.887313 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:14.987453 kubelet[2376]: E1029 23:31:14.987403 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:15.088501 kubelet[2376]: E1029 23:31:15.088455 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:15.189264 kubelet[2376]: E1029 23:31:15.189224 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:15.290125 kubelet[2376]: E1029 23:31:15.290081 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:15.391092 kubelet[2376]: E1029 23:31:15.391041 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:15.491892 kubelet[2376]: E1029 23:31:15.491771 2376 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:15.579012 kubelet[2376]: I1029 23:31:15.577098 2376 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:15.587230 kubelet[2376]: I1029 23:31:15.586886 2376 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:15.593383 kubelet[2376]: I1029 23:31:15.593350 2376 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:15.761929 kubelet[2376]: I1029 23:31:15.761817 2376 apiserver.go:52] "Watching apiserver" Oct 29 23:31:15.766054 kubelet[2376]: E1029 23:31:15.766021 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:15.775124 kubelet[2376]: I1029 23:31:15.775074 2376 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 23:31:15.808696 kubelet[2376]: I1029 23:31:15.808600 2376 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:15.808696 kubelet[2376]: E1029 23:31:15.808653 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:15.814382 kubelet[2376]: E1029 23:31:15.814343 2376 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:15.814547 kubelet[2376]: E1029 23:31:15.814533 2376 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:16.810663 kubelet[2376]: E1029 23:31:16.810550 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:16.810663 kubelet[2376]: E1029 23:31:16.810637 2376 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:16.973999 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-7.scope)... Oct 29 23:31:16.974018 systemd[1]: Reloading... Oct 29 23:31:17.051029 zram_generator::config[2712]: No configuration found. Oct 29 23:31:17.223458 systemd[1]: Reloading finished in 249 ms. Oct 29 23:31:17.246340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:17.263572 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 23:31:17.263788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:17.263840 systemd[1]: kubelet.service: Consumed 1.996s CPU time, 121.8M memory peak. Oct 29 23:31:17.266406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:17.442839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:17.459340 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 23:31:17.495294 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 23:31:17.495294 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:31:17.495647 kubelet[2753]: I1029 23:31:17.495299 2753 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 23:31:17.502773 kubelet[2753]: I1029 23:31:17.502717 2753 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 23:31:17.502773 kubelet[2753]: I1029 23:31:17.502757 2753 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 23:31:17.502926 kubelet[2753]: I1029 23:31:17.502794 2753 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 23:31:17.502926 kubelet[2753]: I1029 23:31:17.502801 2753 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 23:31:17.503103 kubelet[2753]: I1029 23:31:17.503068 2753 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 23:31:17.504410 kubelet[2753]: I1029 23:31:17.504374 2753 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 23:31:17.506721 kubelet[2753]: I1029 23:31:17.506678 2753 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 23:31:17.512034 kubelet[2753]: I1029 23:31:17.510676 2753 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 23:31:17.515751 kubelet[2753]: I1029 23:31:17.515702 2753 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 29 23:31:17.516229 kubelet[2753]: I1029 23:31:17.516196 2753 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 23:31:17.516395 kubelet[2753]: I1029 23:31:17.516232 2753 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 23:31:17.516395 kubelet[2753]: I1029 23:31:17.516395 2753 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 23:31:17.516501 kubelet[2753]: I1029 23:31:17.516405 2753 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 23:31:17.516501 kubelet[2753]: I1029 23:31:17.516433 2753 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 23:31:17.517362 kubelet[2753]: I1029 23:31:17.517345 2753 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:17.517507 kubelet[2753]: I1029 23:31:17.517496 2753 kubelet.go:475] "Attempting to sync node with API server" Oct 29 23:31:17.517548 kubelet[2753]: I1029 23:31:17.517515 2753 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 23:31:17.517548 kubelet[2753]: I1029 23:31:17.517541 2753 kubelet.go:387] "Adding apiserver pod source" Oct 
29 23:31:17.517548 kubelet[2753]: I1029 23:31:17.517551 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 23:31:17.519105 kubelet[2753]: I1029 23:31:17.518490 2753 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 23:31:17.519201 kubelet[2753]: I1029 23:31:17.519155 2753 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 23:31:17.519201 kubelet[2753]: I1029 23:31:17.519183 2753 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 23:31:17.521049 kubelet[2753]: I1029 23:31:17.521020 2753 server.go:1262] "Started kubelet" Oct 29 23:31:17.521715 kubelet[2753]: I1029 23:31:17.521660 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 23:31:17.521890 kubelet[2753]: I1029 23:31:17.521833 2753 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 23:31:17.521945 kubelet[2753]: I1029 23:31:17.521918 2753 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 23:31:17.522175 kubelet[2753]: I1029 23:31:17.522151 2753 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 23:31:17.522241 kubelet[2753]: I1029 23:31:17.522217 2753 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 23:31:17.523125 kubelet[2753]: I1029 23:31:17.523095 2753 server.go:310] "Adding debug handlers to kubelet server" Oct 29 23:31:17.524200 kubelet[2753]: I1029 23:31:17.524168 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 23:31:17.526615 kubelet[2753]: E1029 23:31:17.526559 2753 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:31:17.526703 kubelet[2753]: I1029 23:31:17.526628 2753 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 23:31:17.526869 kubelet[2753]: I1029 23:31:17.526846 2753 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 23:31:17.527044 kubelet[2753]: I1029 23:31:17.527026 2753 reconciler.go:29] "Reconciler: start to sync state" Oct 29 23:31:17.528030 kubelet[2753]: I1029 23:31:17.528000 2753 factory.go:223] Registration of the systemd container factory successfully Oct 29 23:31:17.528148 kubelet[2753]: I1029 23:31:17.528120 2753 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 23:31:17.529461 kubelet[2753]: I1029 23:31:17.529432 2753 factory.go:223] Registration of the containerd container factory successfully Oct 29 23:31:17.541790 kubelet[2753]: E1029 23:31:17.541358 2753 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 23:31:17.551743 kubelet[2753]: I1029 23:31:17.551693 2753 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 29 23:31:17.557992 kubelet[2753]: I1029 23:31:17.557168 2753 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 29 23:31:17.557992 kubelet[2753]: I1029 23:31:17.557198 2753 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 23:31:17.557992 kubelet[2753]: I1029 23:31:17.557221 2753 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 23:31:17.557992 kubelet[2753]: E1029 23:31:17.557290 2753 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 23:31:17.591101 kubelet[2753]: I1029 23:31:17.591070 2753 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 23:31:17.591101 kubelet[2753]: I1029 23:31:17.591087 2753 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 23:31:17.591101 kubelet[2753]: I1029 23:31:17.591109 2753 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:17.591296 kubelet[2753]: I1029 23:31:17.591247 2753 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 23:31:17.592064 kubelet[2753]: I1029 23:31:17.591256 2753 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 23:31:17.592064 kubelet[2753]: I1029 23:31:17.592053 2753 policy_none.go:49] "None policy: Start" Oct 29 23:31:17.592064 kubelet[2753]: I1029 23:31:17.592067 2753 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 23:31:17.592197 kubelet[2753]: I1029 23:31:17.592083 2753 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 23:31:17.592474 kubelet[2753]: I1029 23:31:17.592432 2753 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 29 23:31:17.592474 kubelet[2753]: I1029 23:31:17.592455 2753 policy_none.go:47] "Start" Oct 29 23:31:17.600188 kubelet[2753]: E1029 23:31:17.600150 2753 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 23:31:17.600353 kubelet[2753]: I1029 23:31:17.600321 2753 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 23:31:17.600475 kubelet[2753]: I1029 23:31:17.600353 2753 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 23:31:17.602083 kubelet[2753]: I1029 23:31:17.601999 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 23:31:17.602388 kubelet[2753]: E1029 23:31:17.602360 2753 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 23:31:17.659021 kubelet[2753]: I1029 23:31:17.658955 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:17.659191 kubelet[2753]: I1029 23:31:17.658955 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:17.660034 kubelet[2753]: I1029 23:31:17.659174 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.665647 kubelet[2753]: E1029 23:31:17.665595 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:17.666200 kubelet[2753]: E1029 23:31:17.666171 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:17.666449 kubelet[2753]: E1029 23:31:17.666431 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.703972 kubelet[2753]: I1029 23:31:17.703763 2753 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:31:17.710961 kubelet[2753]: I1029 23:31:17.710915 2753 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 23:31:17.711110 kubelet[2753]: I1029 23:31:17.711018 2753 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 23:31:17.731596 kubelet[2753]: I1029 23:31:17.731534 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.731596 kubelet[2753]: I1029 23:31:17.731588 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:17.731727 kubelet[2753]: I1029 23:31:17.731607 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9eb728101cee2521d3c665a1082214df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9eb728101cee2521d3c665a1082214df\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:17.731727 kubelet[2753]: I1029 23:31:17.731626 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9eb728101cee2521d3c665a1082214df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9eb728101cee2521d3c665a1082214df\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:17.731727 kubelet[2753]: I1029 23:31:17.731644 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9eb728101cee2521d3c665a1082214df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9eb728101cee2521d3c665a1082214df\") " 
pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:17.731727 kubelet[2753]: I1029 23:31:17.731661 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.731727 kubelet[2753]: I1029 23:31:17.731675 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.731852 kubelet[2753]: I1029 23:31:17.731689 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.731852 kubelet[2753]: I1029 23:31:17.731706 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:31:17.967351 kubelet[2753]: E1029 23:31:17.966311 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:17.967795 kubelet[2753]: E1029 23:31:17.967757 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:17.967898 kubelet[2753]: E1029 23:31:17.967801 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:18.521015 kubelet[2753]: I1029 23:31:18.520962 2753 apiserver.go:52] "Watching apiserver" Oct 29 23:31:18.574949 kubelet[2753]: E1029 23:31:18.571913 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:18.574949 kubelet[2753]: I1029 23:31:18.572503 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:18.574949 kubelet[2753]: I1029 23:31:18.573405 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:18.576989 kubelet[2753]: E1029 23:31:18.576945 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 23:31:18.577344 kubelet[2753]: E1029 23:31:18.577321 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:18.579567 kubelet[2753]: E1029 
23:31:18.578994 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 23:31:18.579567 kubelet[2753]: E1029 23:31:18.579139 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:18.593818 kubelet[2753]: I1029 23:31:18.593729 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.593698652 podStartE2EDuration="3.593698652s" podCreationTimestamp="2025-10-29 23:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:18.593101496 +0000 UTC m=+1.130678722" watchObservedRunningTime="2025-10-29 23:31:18.593698652 +0000 UTC m=+1.131275878" Oct 29 23:31:18.602103 kubelet[2753]: I1029 23:31:18.601420 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.601404151 podStartE2EDuration="3.601404151s" podCreationTimestamp="2025-10-29 23:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:18.601136739 +0000 UTC m=+1.138713965" watchObservedRunningTime="2025-10-29 23:31:18.601404151 +0000 UTC m=+1.138981417" Oct 29 23:31:18.625836 kubelet[2753]: I1029 23:31:18.625570 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.6255490889999997 podStartE2EDuration="3.625549089s" podCreationTimestamp="2025-10-29 23:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:18.613699463 +0000 UTC m=+1.151276689" watchObservedRunningTime="2025-10-29 23:31:18.625549089 +0000 UTC m=+1.163126355" Oct 29 23:31:18.632438 kubelet[2753]: I1029 23:31:18.632398 2753 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 23:31:19.573201 kubelet[2753]: E1029 23:31:19.573145 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:19.573613 kubelet[2753]: E1029 23:31:19.573491 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:22.476076 kubelet[2753]: E1029 23:31:22.476032 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:23.356529 kubelet[2753]: I1029 23:31:23.356492 2753 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 23:31:23.356848 containerd[1597]: time="2025-10-29T23:31:23.356805123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 23:31:23.357149 kubelet[2753]: I1029 23:31:23.357016 2753 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 23:31:23.648802 kubelet[2753]: E1029 23:31:23.648638 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:23.916635 kubelet[2753]: E1029 23:31:23.916532 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:24.300622 systemd[1]: Created slice kubepods-besteffort-pod844cc5a2_6537_43e0_98a6_a2ccfbe4960b.slice - libcontainer container kubepods-besteffort-pod844cc5a2_6537_43e0_98a6_a2ccfbe4960b.slice. Oct 29 23:31:24.379796 kubelet[2753]: I1029 23:31:24.379744 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/844cc5a2-6537-43e0-98a6-a2ccfbe4960b-xtables-lock\") pod \"kube-proxy-tfkb2\" (UID: \"844cc5a2-6537-43e0-98a6-a2ccfbe4960b\") " pod="kube-system/kube-proxy-tfkb2" Oct 29 23:31:24.379796 kubelet[2753]: I1029 23:31:24.379785 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/844cc5a2-6537-43e0-98a6-a2ccfbe4960b-lib-modules\") pod \"kube-proxy-tfkb2\" (UID: \"844cc5a2-6537-43e0-98a6-a2ccfbe4960b\") " pod="kube-system/kube-proxy-tfkb2" Oct 29 23:31:24.379796 kubelet[2753]: I1029 23:31:24.379806 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/844cc5a2-6537-43e0-98a6-a2ccfbe4960b-kube-proxy\") pod \"kube-proxy-tfkb2\" (UID: \"844cc5a2-6537-43e0-98a6-a2ccfbe4960b\") " pod="kube-system/kube-proxy-tfkb2" Oct 29 23:31:24.380202 kubelet[2753]: I1029 23:31:24.379820 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jpj\" (UniqueName: \"kubernetes.io/projected/844cc5a2-6537-43e0-98a6-a2ccfbe4960b-kube-api-access-98jpj\") pod \"kube-proxy-tfkb2\" (UID: \"844cc5a2-6537-43e0-98a6-a2ccfbe4960b\") " pod="kube-system/kube-proxy-tfkb2" Oct 29 23:31:24.553541 systemd[1]: Created slice kubepods-besteffort-podeef28e36_3a1b_4cde_bbcf_18d9563fc2d3.slice - libcontainer container kubepods-besteffort-podeef28e36_3a1b_4cde_bbcf_18d9563fc2d3.slice. 
Oct 29 23:31:24.580771 kubelet[2753]: I1029 23:31:24.580726 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7mth\" (UniqueName: \"kubernetes.io/projected/eef28e36-3a1b-4cde-bbcf-18d9563fc2d3-kube-api-access-n7mth\") pod \"tigera-operator-65cdcdfd6d-4mp9l\" (UID: \"eef28e36-3a1b-4cde-bbcf-18d9563fc2d3\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4mp9l" Oct 29 23:31:24.580771 kubelet[2753]: I1029 23:31:24.580774 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eef28e36-3a1b-4cde-bbcf-18d9563fc2d3-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-4mp9l\" (UID: \"eef28e36-3a1b-4cde-bbcf-18d9563fc2d3\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4mp9l" Oct 29 23:31:24.582347 kubelet[2753]: E1029 23:31:24.582048 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:24.582347 kubelet[2753]: E1029 23:31:24.582168 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:24.615025 kubelet[2753]: E1029 23:31:24.614963 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:24.615855 containerd[1597]: time="2025-10-29T23:31:24.615800457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tfkb2,Uid:844cc5a2-6537-43e0-98a6-a2ccfbe4960b,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:24.633703 containerd[1597]: time="2025-10-29T23:31:24.633655863Z" level=info msg="connecting to shim f22f71b408ae2f6b9ec093307c3e96069fa5b94279430de5a57fa38b99d44f83" address="unix:///run/containerd/s/5c4b4aa2ea00a03911b39c2da5c4569e92ec82278afb1405e87154a3e8172feb" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:24.665199 systemd[1]: Started cri-containerd-f22f71b408ae2f6b9ec093307c3e96069fa5b94279430de5a57fa38b99d44f83.scope - libcontainer container f22f71b408ae2f6b9ec093307c3e96069fa5b94279430de5a57fa38b99d44f83. 
Oct 29 23:31:24.688935 containerd[1597]: time="2025-10-29T23:31:24.688894460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tfkb2,Uid:844cc5a2-6537-43e0-98a6-a2ccfbe4960b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f22f71b408ae2f6b9ec093307c3e96069fa5b94279430de5a57fa38b99d44f83\"" Oct 29 23:31:24.694359 kubelet[2753]: E1029 23:31:24.694329 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:24.701521 containerd[1597]: time="2025-10-29T23:31:24.701457815Z" level=info msg="CreateContainer within sandbox \"f22f71b408ae2f6b9ec093307c3e96069fa5b94279430de5a57fa38b99d44f83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 23:31:24.711748 containerd[1597]: time="2025-10-29T23:31:24.711693959Z" level=info msg="Container c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:24.719170 containerd[1597]: time="2025-10-29T23:31:24.719123160Z" level=info msg="CreateContainer within sandbox \"f22f71b408ae2f6b9ec093307c3e96069fa5b94279430de5a57fa38b99d44f83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e\"" Oct 29 23:31:24.719729 containerd[1597]: time="2025-10-29T23:31:24.719704583Z" level=info msg="StartContainer for \"c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e\"" Oct 29 23:31:24.721433 containerd[1597]: time="2025-10-29T23:31:24.721404086Z" level=info msg="connecting to shim c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e" address="unix:///run/containerd/s/5c4b4aa2ea00a03911b39c2da5c4569e92ec82278afb1405e87154a3e8172feb" protocol=ttrpc version=3 Oct 29 23:31:24.742158 systemd[1]: Started cri-containerd-c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e.scope - libcontainer container c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e. Oct 29 23:31:24.774142 containerd[1597]: time="2025-10-29T23:31:24.774046124Z" level=info msg="StartContainer for \"c59c908ea6e4e8dfdd1c56d69d81e9bfa60289b3a821f821f3eb3b40cb24360e\" returns successfully" Oct 29 23:31:24.862281 containerd[1597]: time="2025-10-29T23:31:24.861788347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4mp9l,Uid:eef28e36-3a1b-4cde-bbcf-18d9563fc2d3,Namespace:tigera-operator,Attempt:0,}" Oct 29 23:31:24.879093 containerd[1597]: time="2025-10-29T23:31:24.879047888Z" level=info msg="connecting to shim 34f0d52f89dbd6ffde25b67a0eaeded4f8fa909a95047c5b0f3769473bd13b91" address="unix:///run/containerd/s/28589c4855a3818ab0bef8f093b25598b021d11d87b512615d1b6d9ad4afe066" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:24.905156 systemd[1]: Started cri-containerd-34f0d52f89dbd6ffde25b67a0eaeded4f8fa909a95047c5b0f3769473bd13b91.scope - libcontainer container 34f0d52f89dbd6ffde25b67a0eaeded4f8fa909a95047c5b0f3769473bd13b91. 
Oct 29 23:31:24.936850 containerd[1597]: time="2025-10-29T23:31:24.936795076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4mp9l,Uid:eef28e36-3a1b-4cde-bbcf-18d9563fc2d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"34f0d52f89dbd6ffde25b67a0eaeded4f8fa909a95047c5b0f3769473bd13b91\"" Oct 29 23:31:24.938555 containerd[1597]: time="2025-10-29T23:31:24.938524183Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 23:31:25.587232 kubelet[2753]: E1029 23:31:25.587205 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:25.599520 kubelet[2753]: I1029 23:31:25.599202 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tfkb2" podStartSLOduration=1.5991852770000001 podStartE2EDuration="1.599185277s" podCreationTimestamp="2025-10-29 23:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:25.598375915 +0000 UTC m=+8.135953181" watchObservedRunningTime="2025-10-29 23:31:25.599185277 +0000 UTC m=+8.136762544" Oct 29 23:31:25.933217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603160771.mount: Deactivated successfully. Oct 29 23:31:26.626122 containerd[1597]: time="2025-10-29T23:31:26.626060677Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:26.626904 containerd[1597]: time="2025-10-29T23:31:26.626855834Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 29 23:31:26.627580 containerd[1597]: time="2025-10-29T23:31:26.627540620Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:26.629889 containerd[1597]: time="2025-10-29T23:31:26.629857604Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:26.630486 containerd[1597]: time="2025-10-29T23:31:26.630451661Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.691886235s" Oct 29 23:31:26.630538 containerd[1597]: time="2025-10-29T23:31:26.630485545Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 29 23:31:26.635512 containerd[1597]: time="2025-10-29T23:31:26.635127634Z" level=info msg="CreateContainer within sandbox \"34f0d52f89dbd6ffde25b67a0eaeded4f8fa909a95047c5b0f3769473bd13b91\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 23:31:26.667083 containerd[1597]: time="2025-10-29T23:31:26.666434781Z" level=info msg="Container a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:26.671231 containerd[1597]: 
time="2025-10-29T23:31:26.671194522Z" level=info msg="CreateContainer within sandbox \"34f0d52f89dbd6ffde25b67a0eaeded4f8fa909a95047c5b0f3769473bd13b91\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7\"" Oct 29 23:31:26.671994 containerd[1597]: time="2025-10-29T23:31:26.671938074Z" level=info msg="StartContainer for \"a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7\"" Oct 29 23:31:26.672795 containerd[1597]: time="2025-10-29T23:31:26.672766434Z" level=info msg="connecting to shim a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7" address="unix:///run/containerd/s/28589c4855a3818ab0bef8f093b25598b021d11d87b512615d1b6d9ad4afe066" protocol=ttrpc version=3 Oct 29 23:31:26.709151 systemd[1]: Started cri-containerd-a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7.scope - libcontainer container a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7. Oct 29 23:31:26.735390 containerd[1597]: time="2025-10-29T23:31:26.735353367Z" level=info msg="StartContainer for \"a1899cc8006f4bda72bc2efe113ddab57d31c97e01c8cc6830bb8ccc1100edb7\" returns successfully" Oct 29 23:31:27.619221 kubelet[2753]: I1029 23:31:27.619154 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-4mp9l" podStartSLOduration=1.926036565 podStartE2EDuration="3.619136241s" podCreationTimestamp="2025-10-29 23:31:24 +0000 UTC" firstStartedPulling="2025-10-29 23:31:24.938163944 +0000 UTC m=+7.475741210" lastFinishedPulling="2025-10-29 23:31:26.63126366 +0000 UTC m=+9.168840886" observedRunningTime="2025-10-29 23:31:27.617074612 +0000 UTC m=+10.154652078" watchObservedRunningTime="2025-10-29 23:31:27.619136241 +0000 UTC m=+10.156713547" Oct 29 23:31:31.883374 update_engine[1578]: I20251029 23:31:31.883160 1578 update_attempter.cc:509] Updating boot flags... Oct 29 23:31:31.951379 sudo[1810]: pam_unix(sudo:session): session closed for user root Oct 29 23:31:31.954143 sshd[1809]: Connection closed by 10.0.0.1 port 48020 Oct 29 23:31:31.955095 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:31.998785 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Oct 29 23:31:31.999083 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:48020.service: Deactivated successfully. Oct 29 23:31:32.006516 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 23:31:32.008074 systemd[1]: session-7.scope: Consumed 8.062s CPU time, 216.3M memory peak. Oct 29 23:31:32.020764 systemd-logind[1573]: Removed session 7. Oct 29 23:31:32.495004 kubelet[2753]: E1029 23:31:32.494769 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:40.390338 systemd[1]: Created slice kubepods-besteffort-pod79aaa6ff_7732_48a5_b6f5_fbd26860e7a6.slice - libcontainer container kubepods-besteffort-pod79aaa6ff_7732_48a5_b6f5_fbd26860e7a6.slice. 
Oct 29 23:31:40.490680 kubelet[2753]: I1029 23:31:40.490594 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/79aaa6ff-7732-48a5-b6f5-fbd26860e7a6-typha-certs\") pod \"calico-typha-5f89895bdf-9bh5c\" (UID: \"79aaa6ff-7732-48a5-b6f5-fbd26860e7a6\") " pod="calico-system/calico-typha-5f89895bdf-9bh5c" Oct 29 23:31:40.490680 kubelet[2753]: I1029 23:31:40.490641 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lsk\" (UniqueName: \"kubernetes.io/projected/79aaa6ff-7732-48a5-b6f5-fbd26860e7a6-kube-api-access-j4lsk\") pod \"calico-typha-5f89895bdf-9bh5c\" (UID: \"79aaa6ff-7732-48a5-b6f5-fbd26860e7a6\") " pod="calico-system/calico-typha-5f89895bdf-9bh5c" Oct 29 23:31:40.491092 kubelet[2753]: I1029 23:31:40.490712 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79aaa6ff-7732-48a5-b6f5-fbd26860e7a6-tigera-ca-bundle\") pod \"calico-typha-5f89895bdf-9bh5c\" (UID: \"79aaa6ff-7732-48a5-b6f5-fbd26860e7a6\") " pod="calico-system/calico-typha-5f89895bdf-9bh5c" Oct 29 23:31:40.663643 systemd[1]: Created slice kubepods-besteffort-pode65d1f45_483e_45a3_a5df_78bca5df480e.slice - libcontainer container kubepods-besteffort-pode65d1f45_483e_45a3_a5df_78bca5df480e.slice. Oct 29 23:31:40.692814 kubelet[2753]: I1029 23:31:40.692769 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqjbm\" (UniqueName: \"kubernetes.io/projected/e65d1f45-483e-45a3-a5df-78bca5df480e-kube-api-access-pqjbm\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.692814 kubelet[2753]: I1029 23:31:40.692818 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-cni-log-dir\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693014 kubelet[2753]: I1029 23:31:40.692856 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-var-run-calico\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693014 kubelet[2753]: I1029 23:31:40.692875 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e65d1f45-483e-45a3-a5df-78bca5df480e-node-certs\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693014 kubelet[2753]: I1029 23:31:40.692898 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65d1f45-483e-45a3-a5df-78bca5df480e-tigera-ca-bundle\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693014 kubelet[2753]: I1029 23:31:40.692912 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-var-lib-calico\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693014 kubelet[2753]: I1029 23:31:40.692927 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-xtables-lock\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693121 kubelet[2753]: I1029 23:31:40.692943 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-cni-net-dir\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693121 kubelet[2753]: I1029 23:31:40.692956 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-lib-modules\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693121 kubelet[2753]: I1029 23:31:40.692971 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-cni-bin-dir\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693121 kubelet[2753]: I1029 23:31:40.693005 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-flexvol-driver-host\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.693121 kubelet[2753]: I1029 23:31:40.693022 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e65d1f45-483e-45a3-a5df-78bca5df480e-policysync\") pod \"calico-node-2rkwv\" (UID: \"e65d1f45-483e-45a3-a5df-78bca5df480e\") " pod="calico-system/calico-node-2rkwv" Oct 29 23:31:40.698766 kubelet[2753]: E1029 23:31:40.698733 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:40.699376 containerd[1597]: time="2025-10-29T23:31:40.699337361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f89895bdf-9bh5c,Uid:79aaa6ff-7732-48a5-b6f5-fbd26860e7a6,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:40.748306 containerd[1597]: time="2025-10-29T23:31:40.748260941Z" level=info msg="connecting to shim e6b82eefdf424bcc0ce2e39f2ca1ab2ca255a3856eedf6800d6f365da31557c1" address="unix:///run/containerd/s/8bbbadbd218b483b6776e75721034d224fe2da9643e00aab9ebff019d6c52a25" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:40.774459 systemd[1]: Started cri-containerd-e6b82eefdf424bcc0ce2e39f2ca1ab2ca255a3856eedf6800d6f365da31557c1.scope - libcontainer container e6b82eefdf424bcc0ce2e39f2ca1ab2ca255a3856eedf6800d6f365da31557c1. 
Oct 29 23:31:40.801994 kubelet[2753]: E1029 23:31:40.801781 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.801994 kubelet[2753]: W1029 23:31:40.801808 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.801994 kubelet[2753]: E1029 23:31:40.801833 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.803973 kubelet[2753]: E1029 23:31:40.803925 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.803973 kubelet[2753]: W1029 23:31:40.803942 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.803973 kubelet[2753]: E1029 23:31:40.803956 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.811712 kubelet[2753]: E1029 23:31:40.811685 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.811712 kubelet[2753]: W1029 23:31:40.811705 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.811839 kubelet[2753]: E1029 23:31:40.811733 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.844996 kubelet[2753]: E1029 23:31:40.844946 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:31:40.860426 kubelet[2753]: E1029 23:31:40.860381 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.860426 kubelet[2753]: W1029 23:31:40.860423 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.860583 kubelet[2753]: E1029 23:31:40.860445 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.860635 kubelet[2753]: E1029 23:31:40.860606 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.860676 kubelet[2753]: W1029 23:31:40.860618 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.860701 kubelet[2753]: E1029 23:31:40.860675 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.860837 kubelet[2753]: E1029 23:31:40.860811 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.860837 kubelet[2753]: W1029 23:31:40.860823 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.860837 kubelet[2753]: E1029 23:31:40.860830 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.860984 kubelet[2753]: E1029 23:31:40.860968 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861018 kubelet[2753]: W1029 23:31:40.861003 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.861018 kubelet[2753]: E1029 23:31:40.861012 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.861199 kubelet[2753]: E1029 23:31:40.861171 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861199 kubelet[2753]: W1029 23:31:40.861183 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.861199 kubelet[2753]: E1029 23:31:40.861192 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.861348 kubelet[2753]: E1029 23:31:40.861336 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861348 kubelet[2753]: W1029 23:31:40.861346 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.861396 kubelet[2753]: E1029 23:31:40.861354 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.861512 kubelet[2753]: E1029 23:31:40.861500 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861534 kubelet[2753]: W1029 23:31:40.861522 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.861534 kubelet[2753]: E1029 23:31:40.861531 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.861672 kubelet[2753]: E1029 23:31:40.861661 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861695 kubelet[2753]: W1029 23:31:40.861671 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.861695 kubelet[2753]: E1029 23:31:40.861679 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.861822 kubelet[2753]: E1029 23:31:40.861811 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861845 kubelet[2753]: W1029 23:31:40.861822 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.861845 kubelet[2753]: E1029 23:31:40.861838 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.861959 kubelet[2753]: E1029 23:31:40.861949 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.861959 kubelet[2753]: W1029 23:31:40.861958 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862020 kubelet[2753]: E1029 23:31:40.861965 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.862101 kubelet[2753]: E1029 23:31:40.862090 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.862101 kubelet[2753]: W1029 23:31:40.862099 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862146 kubelet[2753]: E1029 23:31:40.862106 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.862242 kubelet[2753]: E1029 23:31:40.862231 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.862267 kubelet[2753]: W1029 23:31:40.862241 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862267 kubelet[2753]: E1029 23:31:40.862250 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.862386 kubelet[2753]: E1029 23:31:40.862377 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.862409 kubelet[2753]: W1029 23:31:40.862386 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862409 kubelet[2753]: E1029 23:31:40.862394 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.862542 kubelet[2753]: E1029 23:31:40.862530 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.862565 kubelet[2753]: W1029 23:31:40.862542 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862565 kubelet[2753]: E1029 23:31:40.862550 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.862684 kubelet[2753]: E1029 23:31:40.862674 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.862706 kubelet[2753]: W1029 23:31:40.862684 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862706 kubelet[2753]: E1029 23:31:40.862692 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.862852 kubelet[2753]: E1029 23:31:40.862840 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.862852 kubelet[2753]: W1029 23:31:40.862851 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.862891 kubelet[2753]: E1029 23:31:40.862858 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.863021 kubelet[2753]: E1029 23:31:40.863011 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.863041 kubelet[2753]: W1029 23:31:40.863020 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.863041 kubelet[2753]: E1029 23:31:40.863030 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.863172 kubelet[2753]: E1029 23:31:40.863162 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.863172 kubelet[2753]: W1029 23:31:40.863172 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.863210 kubelet[2753]: E1029 23:31:40.863179 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.863325 kubelet[2753]: E1029 23:31:40.863314 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.863347 kubelet[2753]: W1029 23:31:40.863326 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.863347 kubelet[2753]: E1029 23:31:40.863334 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.863491 kubelet[2753]: E1029 23:31:40.863477 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.863491 kubelet[2753]: W1029 23:31:40.863487 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.863535 kubelet[2753]: E1029 23:31:40.863496 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.876721 containerd[1597]: time="2025-10-29T23:31:40.876655424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f89895bdf-9bh5c,Uid:79aaa6ff-7732-48a5-b6f5-fbd26860e7a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6b82eefdf424bcc0ce2e39f2ca1ab2ca255a3856eedf6800d6f365da31557c1\"" Oct 29 23:31:40.877526 kubelet[2753]: E1029 23:31:40.877495 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:40.878427 containerd[1597]: time="2025-10-29T23:31:40.878388228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 23:31:40.894925 kubelet[2753]: E1029 23:31:40.894693 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.894925 kubelet[2753]: W1029 23:31:40.894924 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.896999 kubelet[2753]: E1029 23:31:40.894951 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.896999 kubelet[2753]: I1029 23:31:40.894997 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/10144807-b663-4cb3-833e-3110eaa2d568-socket-dir\") pod \"csi-node-driver-h2hkw\" (UID: \"10144807-b663-4cb3-833e-3110eaa2d568\") " pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:40.896999 kubelet[2753]: E1029 23:31:40.895739 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.896999 kubelet[2753]: W1029 23:31:40.895756 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.896999 kubelet[2753]: E1029 23:31:40.895770 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.896999 kubelet[2753]: I1029 23:31:40.895795 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10144807-b663-4cb3-833e-3110eaa2d568-kubelet-dir\") pod \"csi-node-driver-h2hkw\" (UID: \"10144807-b663-4cb3-833e-3110eaa2d568\") " pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:40.896999 kubelet[2753]: E1029 23:31:40.896338 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.896999 kubelet[2753]: W1029 23:31:40.896355 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.896999 kubelet[2753]: E1029 23:31:40.896369 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.897375 kubelet[2753]: E1029 23:31:40.897330 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.897375 kubelet[2753]: W1029 23:31:40.897364 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.897609 kubelet[2753]: E1029 23:31:40.897504 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.898026 kubelet[2753]: E1029 23:31:40.897986 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.898026 kubelet[2753]: W1029 23:31:40.898006 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.898026 kubelet[2753]: E1029 23:31:40.898021 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.898245 kubelet[2753]: I1029 23:31:40.898049 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/10144807-b663-4cb3-833e-3110eaa2d568-registration-dir\") pod \"csi-node-driver-h2hkw\" (UID: \"10144807-b663-4cb3-833e-3110eaa2d568\") " pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:40.899143 kubelet[2753]: E1029 23:31:40.899081 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.899143 kubelet[2753]: W1029 23:31:40.899104 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.899143 kubelet[2753]: E1029 23:31:40.899120 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.899143 kubelet[2753]: I1029 23:31:40.899143 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/10144807-b663-4cb3-833e-3110eaa2d568-varrun\") pod \"csi-node-driver-h2hkw\" (UID: \"10144807-b663-4cb3-833e-3110eaa2d568\") " pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:40.899657 kubelet[2753]: E1029 23:31:40.899620 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.899704 kubelet[2753]: W1029 23:31:40.899659 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.899845 kubelet[2753]: E1029 23:31:40.899781 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.900211 kubelet[2753]: E1029 23:31:40.900191 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.900279 kubelet[2753]: W1029 23:31:40.900208 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.900279 kubelet[2753]: E1029 23:31:40.900231 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.901282 kubelet[2753]: E1029 23:31:40.901262 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.901282 kubelet[2753]: W1029 23:31:40.901281 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.901371 kubelet[2753]: E1029 23:31:40.901297 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.901534 kubelet[2753]: E1029 23:31:40.901520 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.901534 kubelet[2753]: W1029 23:31:40.901533 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.901636 kubelet[2753]: E1029 23:31:40.901544 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.902180 kubelet[2753]: I1029 23:31:40.902155 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62frt\" (UniqueName: \"kubernetes.io/projected/10144807-b663-4cb3-833e-3110eaa2d568-kube-api-access-62frt\") pod \"csi-node-driver-h2hkw\" (UID: \"10144807-b663-4cb3-833e-3110eaa2d568\") " pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:40.903081 kubelet[2753]: E1029 23:31:40.903023 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.903081 kubelet[2753]: W1029 23:31:40.903046 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.903081 kubelet[2753]: E1029 23:31:40.903062 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.903537 kubelet[2753]: E1029 23:31:40.903493 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.903537 kubelet[2753]: W1029 23:31:40.903510 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.903537 kubelet[2753]: E1029 23:31:40.903527 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.904448 kubelet[2753]: E1029 23:31:40.904409 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.904448 kubelet[2753]: W1029 23:31:40.904445 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.904518 kubelet[2753]: E1029 23:31:40.904464 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.905278 kubelet[2753]: E1029 23:31:40.905259 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.905278 kubelet[2753]: W1029 23:31:40.905277 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.905331 kubelet[2753]: E1029 23:31:40.905289 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:40.905466 kubelet[2753]: E1029 23:31:40.905453 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:40.905466 kubelet[2753]: W1029 23:31:40.905464 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:40.905509 kubelet[2753]: E1029 23:31:40.905473 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:40.968448 kubelet[2753]: E1029 23:31:40.968328 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:40.971183 containerd[1597]: time="2025-10-29T23:31:40.971140699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rkwv,Uid:e65d1f45-483e-45a3-a5df-78bca5df480e,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:41.005826 kubelet[2753]: E1029 23:31:41.005791 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.005826 kubelet[2753]: W1029 23:31:41.005815 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.005826 kubelet[2753]: E1029 23:31:41.005834 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.006055 kubelet[2753]: E1029 23:31:41.006039 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.006055 kubelet[2753]: W1029 23:31:41.006051 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.006113 kubelet[2753]: E1029 23:31:41.006060 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.006257 kubelet[2753]: E1029 23:31:41.006232 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.006257 kubelet[2753]: W1029 23:31:41.006244 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.006257 kubelet[2753]: E1029 23:31:41.006252 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.006477 kubelet[2753]: E1029 23:31:41.006462 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.006517 kubelet[2753]: W1029 23:31:41.006480 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.006517 kubelet[2753]: E1029 23:31:41.006490 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:41.006694 kubelet[2753]: E1029 23:31:41.006679 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.006694 kubelet[2753]: W1029 23:31:41.006691 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.006768 kubelet[2753]: E1029 23:31:41.006702 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.006929 kubelet[2753]: E1029 23:31:41.006916 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.006929 kubelet[2753]: W1029 23:31:41.006927 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.006992 kubelet[2753]: E1029 23:31:41.006935 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.007104 kubelet[2753]: E1029 23:31:41.007093 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.007104 kubelet[2753]: W1029 23:31:41.007102 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.007162 kubelet[2753]: E1029 23:31:41.007111 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.007334 kubelet[2753]: E1029 23:31:41.007307 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.007334 kubelet[2753]: W1029 23:31:41.007320 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.007334 kubelet[2753]: E1029 23:31:41.007329 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.007488 kubelet[2753]: E1029 23:31:41.007476 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.007488 kubelet[2753]: W1029 23:31:41.007486 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.007545 kubelet[2753]: E1029 23:31:41.007494 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:41.007707 kubelet[2753]: E1029 23:31:41.007684 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.007707 kubelet[2753]: W1029 23:31:41.007696 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.007783 kubelet[2753]: E1029 23:31:41.007706 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.008008 kubelet[2753]: E1029 23:31:41.007988 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.008008 kubelet[2753]: W1029 23:31:41.008001 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.008072 kubelet[2753]: E1029 23:31:41.008010 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.008170 kubelet[2753]: E1029 23:31:41.008157 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.008170 kubelet[2753]: W1029 23:31:41.008167 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.008219 kubelet[2753]: E1029 23:31:41.008175 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.008388 kubelet[2753]: E1029 23:31:41.008372 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.008388 kubelet[2753]: W1029 23:31:41.008382 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.008444 kubelet[2753]: E1029 23:31:41.008390 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.008951 kubelet[2753]: E1029 23:31:41.008921 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.008951 kubelet[2753]: W1029 23:31:41.008944 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.009042 kubelet[2753]: E1029 23:31:41.008959 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:41.009164 kubelet[2753]: E1029 23:31:41.009148 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.009164 kubelet[2753]: W1029 23:31:41.009161 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.009206 kubelet[2753]: E1029 23:31:41.009171 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.009316 kubelet[2753]: E1029 23:31:41.009305 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.009316 kubelet[2753]: W1029 23:31:41.009315 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.009362 kubelet[2753]: E1029 23:31:41.009323 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.009591 kubelet[2753]: E1029 23:31:41.009568 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.009591 kubelet[2753]: W1029 23:31:41.009584 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.009647 kubelet[2753]: E1029 23:31:41.009598 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.009759 containerd[1597]: time="2025-10-29T23:31:41.009723557Z" level=info msg="connecting to shim 4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1" address="unix:///run/containerd/s/dc3208e3b7d49df5de181c862a90faa121165ef8421a820e977b7b99328eac1d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:41.010266 kubelet[2753]: E1029 23:31:41.010242 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.010266 kubelet[2753]: W1029 23:31:41.010265 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.010266 kubelet[2753]: E1029 23:31:41.010278 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:41.010471 kubelet[2753]: E1029 23:31:41.010456 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.010471 kubelet[2753]: W1029 23:31:41.010469 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.010534 kubelet[2753]: E1029 23:31:41.010480 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.010899 kubelet[2753]: E1029 23:31:41.010886 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.010927 kubelet[2753]: W1029 23:31:41.010900 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.010927 kubelet[2753]: E1029 23:31:41.010911 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.011121 kubelet[2753]: E1029 23:31:41.011107 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.011121 kubelet[2753]: W1029 23:31:41.011119 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.011237 kubelet[2753]: E1029 23:31:41.011130 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.011357 kubelet[2753]: E1029 23:31:41.011343 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.011396 kubelet[2753]: W1029 23:31:41.011360 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.011396 kubelet[2753]: E1029 23:31:41.011369 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.012999 kubelet[2753]: E1029 23:31:41.012868 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.012999 kubelet[2753]: W1029 23:31:41.012895 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.012999 kubelet[2753]: E1029 23:31:41.012909 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:41.013322 kubelet[2753]: E1029 23:31:41.013305 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.013322 kubelet[2753]: W1029 23:31:41.013319 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.013444 kubelet[2753]: E1029 23:31:41.013402 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.013703 kubelet[2753]: E1029 23:31:41.013688 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.013729 kubelet[2753]: W1029 23:31:41.013721 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.013760 kubelet[2753]: E1029 23:31:41.013733 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.022674 kubelet[2753]: E1029 23:31:41.022644 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:41.022776 kubelet[2753]: W1029 23:31:41.022697 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:41.022776 kubelet[2753]: E1029 23:31:41.022716 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:41.038180 systemd[1]: Started cri-containerd-4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1.scope - libcontainer container 4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1. Oct 29 23:31:41.090972 containerd[1597]: time="2025-10-29T23:31:41.090929380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rkwv,Uid:e65d1f45-483e-45a3-a5df-78bca5df480e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\"" Oct 29 23:31:41.091625 kubelet[2753]: E1029 23:31:41.091597 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:41.838033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422450347.mount: Deactivated successfully. 
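The repeated driver-call.go / plugins.go messages above come from kubelet's FlexVolume prober: it finds Calico's nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds executable is not installed on the host, so the init call produces no output and decoding that empty string as JSON fails. The JSON half of the failure is easy to reproduce; a minimal sketch with hypothetical field names, not kubelet's real driver-call code:

```go
// Sketch: why an empty FlexVolume driver response surfaces as
// "unexpected end of JSON input" in the driver-call.go entries above.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the driver's JSON reply; the field names here
// are assumptions, since the real schema is not shown in this log.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	output := "" // the uds binary was never executed, so there is no output
	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Println("Failed to unmarshal output for command: init, error:", err)
		// prints: ... unexpected end of JSON input
	}
}
```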
Oct 29 23:31:42.384516 containerd[1597]: time="2025-10-29T23:31:42.384454579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:42.385232 containerd[1597]: time="2025-10-29T23:31:42.385140849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 29 23:31:42.386107 containerd[1597]: time="2025-10-29T23:31:42.386074411Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:42.389883 containerd[1597]: time="2025-10-29T23:31:42.389847059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:42.391515 containerd[1597]: time="2025-10-29T23:31:42.391438451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.51300498s" Oct 29 23:31:42.391515 containerd[1597]: time="2025-10-29T23:31:42.391474252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 29 23:31:42.393066 containerd[1597]: time="2025-10-29T23:31:42.393018721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 23:31:42.411055 containerd[1597]: time="2025-10-29T23:31:42.411013605Z" level=info msg="CreateContainer within sandbox \"e6b82eefdf424bcc0ce2e39f2ca1ab2ca255a3856eedf6800d6f365da31557c1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 23:31:42.417836 containerd[1597]: time="2025-10-29T23:31:42.417793548Z" level=info msg="Container 0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:42.432692 containerd[1597]: time="2025-10-29T23:31:42.432647251Z" level=info msg="CreateContainer within sandbox \"e6b82eefdf424bcc0ce2e39f2ca1ab2ca255a3856eedf6800d6f365da31557c1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a\"" Oct 29 23:31:42.433216 containerd[1597]: time="2025-10-29T23:31:42.433189075Z" level=info msg="StartContainer for \"0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a\"" Oct 29 23:31:42.434649 containerd[1597]: time="2025-10-29T23:31:42.434543216Z" level=info msg="connecting to shim 0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a" address="unix:///run/containerd/s/8bbbadbd218b483b6776e75721034d224fe2da9643e00aab9ebff019d6c52a25" protocol=ttrpc version=3 Oct 29 23:31:42.455167 systemd[1]: Started cri-containerd-0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a.scope - libcontainer container 0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a. 
Oct 29 23:31:42.497328 containerd[1597]: time="2025-10-29T23:31:42.497288218Z" level=info msg="StartContainer for \"0a34638067121f849a1a89aa41713df6b88cf7f2c8a8f7d8da4f8e715333ca3a\" returns successfully" Oct 29 23:31:42.557802 kubelet[2753]: E1029 23:31:42.557751 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:31:42.649515 kubelet[2753]: E1029 23:31:42.649105 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:42.677852 kubelet[2753]: E1029 23:31:42.677820 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.677852 kubelet[2753]: W1029 23:31:42.677842 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.678033 kubelet[2753]: E1029 23:31:42.677864 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.678217 kubelet[2753]: E1029 23:31:42.678196 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.678270 kubelet[2753]: W1029 23:31:42.678212 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.678270 kubelet[2753]: E1029 23:31:42.678262 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.679128 kubelet[2753]: E1029 23:31:42.679102 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.679128 kubelet[2753]: W1029 23:31:42.679129 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.679729 kubelet[2753]: E1029 23:31:42.679143 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.679729 kubelet[2753]: E1029 23:31:42.679505 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.679729 kubelet[2753]: W1029 23:31:42.679519 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.679729 kubelet[2753]: E1029 23:31:42.679530 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:42.680068 kubelet[2753]: E1029 23:31:42.680048 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.680068 kubelet[2753]: W1029 23:31:42.680063 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.680214 kubelet[2753]: E1029 23:31:42.680075 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.680242 kubelet[2753]: E1029 23:31:42.680228 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.680242 kubelet[2753]: W1029 23:31:42.680237 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.680287 kubelet[2753]: E1029 23:31:42.680246 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.680806 kubelet[2753]: E1029 23:31:42.680777 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.680806 kubelet[2753]: W1029 23:31:42.680792 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.680806 kubelet[2753]: E1029 23:31:42.680807 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.681660 kubelet[2753]: E1029 23:31:42.681639 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.681660 kubelet[2753]: W1029 23:31:42.681656 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.681660 kubelet[2753]: E1029 23:31:42.681669 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.681943 kubelet[2753]: E1029 23:31:42.681926 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.681943 kubelet[2753]: W1029 23:31:42.681939 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.682791 kubelet[2753]: E1029 23:31:42.682751 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:42.683014 kubelet[2753]: E1029 23:31:42.682963 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.683014 kubelet[2753]: W1029 23:31:42.683006 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.683014 kubelet[2753]: E1029 23:31:42.683017 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.683273 kubelet[2753]: E1029 23:31:42.683256 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.683273 kubelet[2753]: W1029 23:31:42.683269 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.683340 kubelet[2753]: E1029 23:31:42.683280 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.684953 kubelet[2753]: E1029 23:31:42.684692 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.684953 kubelet[2753]: W1029 23:31:42.684945 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.685270 kubelet[2753]: E1029 23:31:42.685237 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.686296 kubelet[2753]: E1029 23:31:42.686271 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.686296 kubelet[2753]: W1029 23:31:42.686289 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.686413 kubelet[2753]: E1029 23:31:42.686310 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.686534 kubelet[2753]: E1029 23:31:42.686518 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.686534 kubelet[2753]: W1029 23:31:42.686531 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.686534 kubelet[2753]: E1029 23:31:42.686542 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:42.686954 kubelet[2753]: E1029 23:31:42.686931 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.686954 kubelet[2753]: W1029 23:31:42.686947 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.687058 kubelet[2753]: E1029 23:31:42.686962 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.720561 kubelet[2753]: E1029 23:31:42.720478 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.720561 kubelet[2753]: W1029 23:31:42.720548 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.720742 kubelet[2753]: E1029 23:31:42.720576 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.720930 kubelet[2753]: E1029 23:31:42.720904 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.720930 kubelet[2753]: W1029 23:31:42.720922 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.721176 kubelet[2753]: E1029 23:31:42.720933 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.721272 kubelet[2753]: E1029 23:31:42.721252 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.721330 kubelet[2753]: W1029 23:31:42.721317 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.721382 kubelet[2753]: E1029 23:31:42.721372 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.721638 kubelet[2753]: E1029 23:31:42.721624 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.721718 kubelet[2753]: W1029 23:31:42.721705 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.721768 kubelet[2753]: E1029 23:31:42.721758 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:42.722667 kubelet[2753]: E1029 23:31:42.722031 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.722788 kubelet[2753]: W1029 23:31:42.722768 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.722849 kubelet[2753]: E1029 23:31:42.722837 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.723173 kubelet[2753]: E1029 23:31:42.723157 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.723277 kubelet[2753]: W1029 23:31:42.723263 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.723338 kubelet[2753]: E1029 23:31:42.723327 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.723989 kubelet[2753]: E1029 23:31:42.723663 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.723989 kubelet[2753]: W1029 23:31:42.723679 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.723989 kubelet[2753]: E1029 23:31:42.723713 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.724254 kubelet[2753]: E1029 23:31:42.724149 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.724326 kubelet[2753]: W1029 23:31:42.724311 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.724473 kubelet[2753]: E1029 23:31:42.724458 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.725079 kubelet[2753]: E1029 23:31:42.725058 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.725595 kubelet[2753]: W1029 23:31:42.725253 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.725595 kubelet[2753]: E1029 23:31:42.725280 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:42.726377 kubelet[2753]: E1029 23:31:42.726229 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.727800 kubelet[2753]: W1029 23:31:42.727771 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.727910 kubelet[2753]: E1029 23:31:42.727897 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.728234 kubelet[2753]: E1029 23:31:42.728219 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.728305 kubelet[2753]: W1029 23:31:42.728293 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.728376 kubelet[2753]: E1029 23:31:42.728365 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.728715 kubelet[2753]: E1029 23:31:42.728609 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.728715 kubelet[2753]: W1029 23:31:42.728621 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.728715 kubelet[2753]: E1029 23:31:42.728632 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.728879 kubelet[2753]: E1029 23:31:42.728867 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.728929 kubelet[2753]: W1029 23:31:42.728919 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.729019 kubelet[2753]: E1029 23:31:42.729006 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.729947 kubelet[2753]: E1029 23:31:42.729453 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.729947 kubelet[2753]: W1029 23:31:42.729468 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.729947 kubelet[2753]: E1029 23:31:42.729481 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:42.729947 kubelet[2753]: E1029 23:31:42.729890 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.729947 kubelet[2753]: W1029 23:31:42.729904 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.729947 kubelet[2753]: E1029 23:31:42.729917 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.731076 kubelet[2753]: E1029 23:31:42.731045 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.731076 kubelet[2753]: W1029 23:31:42.731068 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.731076 kubelet[2753]: E1029 23:31:42.731081 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.732013 kubelet[2753]: E1029 23:31:42.731437 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.732013 kubelet[2753]: W1029 23:31:42.731452 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.732013 kubelet[2753]: E1029 23:31:42.731464 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:42.733103 kubelet[2753]: E1029 23:31:42.733082 2753 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:42.733853 kubelet[2753]: W1029 23:31:42.733827 2753 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:42.733956 kubelet[2753]: E1029 23:31:42.733941 2753 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:43.479647 containerd[1597]: time="2025-10-29T23:31:43.479586669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:43.480532 containerd[1597]: time="2025-10-29T23:31:43.480144613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 29 23:31:43.481296 containerd[1597]: time="2025-10-29T23:31:43.481138695Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:43.483133 containerd[1597]: time="2025-10-29T23:31:43.483098539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:43.483881 containerd[1597]: time="2025-10-29T23:31:43.483850132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.090712005s" Oct 29 23:31:43.483932 containerd[1597]: time="2025-10-29T23:31:43.483884373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 29 23:31:43.487341 containerd[1597]: time="2025-10-29T23:31:43.487209836Z" level=info msg="CreateContainer within sandbox \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 23:31:43.494263 containerd[1597]: time="2025-10-29T23:31:43.494219256Z" level=info msg="Container fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:43.497167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412277144.mount: Deactivated successfully. Oct 29 23:31:43.501635 containerd[1597]: time="2025-10-29T23:31:43.501590172Z" level=info msg="CreateContainer within sandbox \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\"" Oct 29 23:31:43.502087 containerd[1597]: time="2025-10-29T23:31:43.502063512Z" level=info msg="StartContainer for \"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\"" Oct 29 23:31:43.504987 containerd[1597]: time="2025-10-29T23:31:43.504832831Z" level=info msg="connecting to shim fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8" address="unix:///run/containerd/s/dc3208e3b7d49df5de181c862a90faa121165ef8421a820e977b7b99328eac1d" protocol=ttrpc version=3 Oct 29 23:31:43.528300 systemd[1]: Started cri-containerd-fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8.scope - libcontainer container fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8. 
Oct 29 23:31:43.571001 containerd[1597]: time="2025-10-29T23:31:43.569191150Z" level=info msg="StartContainer for \"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\" returns successfully" Oct 29 23:31:43.581482 systemd[1]: cri-containerd-fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8.scope: Deactivated successfully. Oct 29 23:31:43.594706 containerd[1597]: time="2025-10-29T23:31:43.594664881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\" id:\"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\" pid:3466 exited_at:{seconds:1761780703 nanos:594062856}" Oct 29 23:31:43.598751 containerd[1597]: time="2025-10-29T23:31:43.598708975Z" level=info msg="received exit event container_id:\"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\" id:\"fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8\" pid:3466 exited_at:{seconds:1761780703 nanos:594062856}" Oct 29 23:31:43.623064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc552068f827ccc5a0b6effd60f138751de489ea33258262b5c9c5050d5c3ba8-rootfs.mount: Deactivated successfully. Oct 29 23:31:43.651855 kubelet[2753]: I1029 23:31:43.651558 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:31:43.651855 kubelet[2753]: E1029 23:31:43.651794 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:43.652747 kubelet[2753]: E1029 23:31:43.652686 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:43.672786 kubelet[2753]: I1029 23:31:43.671954 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f89895bdf-9bh5c" podStartSLOduration=2.157115011 podStartE2EDuration="3.671939274s" podCreationTimestamp="2025-10-29 23:31:40 +0000 UTC" firstStartedPulling="2025-10-29 23:31:40.877925366 +0000 UTC m=+23.415502632" lastFinishedPulling="2025-10-29 23:31:42.392749669 +0000 UTC m=+24.930326895" observedRunningTime="2025-10-29 23:31:42.671220466 +0000 UTC m=+25.208797732" watchObservedRunningTime="2025-10-29 23:31:43.671939274 +0000 UTC m=+26.209516540" Oct 29 23:31:44.558216 kubelet[2753]: E1029 23:31:44.558168 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:31:44.655368 kubelet[2753]: E1029 23:31:44.655311 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:44.659181 containerd[1597]: time="2025-10-29T23:31:44.659027234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 23:31:46.558458 kubelet[2753]: E1029 23:31:46.558404 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h2hkw" 
podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:31:47.261203 containerd[1597]: time="2025-10-29T23:31:47.261122777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:47.262229 containerd[1597]: time="2025-10-29T23:31:47.262178936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 29 23:31:47.263541 containerd[1597]: time="2025-10-29T23:31:47.263149332Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:47.265131 containerd[1597]: time="2025-10-29T23:31:47.265099363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:47.265925 containerd[1597]: time="2025-10-29T23:31:47.265886912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.606810916s" Oct 29 23:31:47.265925 containerd[1597]: time="2025-10-29T23:31:47.265922353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 29 23:31:47.270426 containerd[1597]: time="2025-10-29T23:31:47.270269273Z" level=info msg="CreateContainer within sandbox \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 23:31:47.280451 containerd[1597]: time="2025-10-29T23:31:47.279993870Z" level=info msg="Container a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:47.280914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664167257.mount: Deactivated successfully. Oct 29 23:31:47.287260 containerd[1597]: time="2025-10-29T23:31:47.287207895Z" level=info msg="CreateContainer within sandbox \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\"" Oct 29 23:31:47.288101 containerd[1597]: time="2025-10-29T23:31:47.288051366Z" level=info msg="StartContainer for \"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\"" Oct 29 23:31:47.289905 containerd[1597]: time="2025-10-29T23:31:47.289874833Z" level=info msg="connecting to shim a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa" address="unix:///run/containerd/s/dc3208e3b7d49df5de181c862a90faa121165ef8421a820e977b7b99328eac1d" protocol=ttrpc version=3 Oct 29 23:31:47.312187 systemd[1]: Started cri-containerd-a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa.scope - libcontainer container a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa. 
Oct 29 23:31:47.347112 containerd[1597]: time="2025-10-29T23:31:47.346952209Z" level=info msg="StartContainer for \"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\" returns successfully" Oct 29 23:31:47.664343 kubelet[2753]: E1029 23:31:47.664292 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:47.880215 systemd[1]: cri-containerd-a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa.scope: Deactivated successfully. Oct 29 23:31:47.880506 systemd[1]: cri-containerd-a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa.scope: Consumed 472ms CPU time, 177.7M memory peak, 2.6M read from disk, 165.9M written to disk. Oct 29 23:31:47.883154 containerd[1597]: time="2025-10-29T23:31:47.882855129Z" level=info msg="received exit event container_id:\"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\" id:\"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\" pid:3528 exited_at:{seconds:1761780707 nanos:881625564}" Oct 29 23:31:47.883283 containerd[1597]: time="2025-10-29T23:31:47.883198062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\" id:\"a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa\" pid:3528 exited_at:{seconds:1761780707 nanos:881625564}" Oct 29 23:31:47.904230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2e614c9e22dc96a6718c998de7202b0195cbe383534a4f2883889239d5175aa-rootfs.mount: Deactivated successfully. Oct 29 23:31:47.966678 kubelet[2753]: I1029 23:31:47.966553 2753 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 29 23:31:48.053116 systemd[1]: Created slice kubepods-burstable-podf9e57f2b_e212_4e6d_8835_0563a00cdf57.slice - libcontainer container kubepods-burstable-podf9e57f2b_e212_4e6d_8835_0563a00cdf57.slice. Oct 29 23:31:48.063605 systemd[1]: Created slice kubepods-besteffort-pod17b6b460_22bb_4f11_b0a1_b884ed538e3c.slice - libcontainer container kubepods-besteffort-pod17b6b460_22bb_4f11_b0a1_b884ed538e3c.slice. Oct 29 23:31:48.070673 systemd[1]: Created slice kubepods-besteffort-podd94c374e_545a_4366_ade5_85d4ae3cccd1.slice - libcontainer container kubepods-besteffort-podd94c374e_545a_4366_ade5_85d4ae3cccd1.slice. Oct 29 23:31:48.082966 systemd[1]: Created slice kubepods-burstable-pod7b87c64e_f652_434c_854d_97f667ea0aaa.slice - libcontainer container kubepods-burstable-pod7b87c64e_f652_434c_854d_97f667ea0aaa.slice. Oct 29 23:31:48.089223 systemd[1]: Created slice kubepods-besteffort-pod145c01ef_4eaa_44de_b6fe_4efc0106e77d.slice - libcontainer container kubepods-besteffort-pod145c01ef_4eaa_44de_b6fe_4efc0106e77d.slice. Oct 29 23:31:48.095591 systemd[1]: Created slice kubepods-besteffort-pod6412d168_75ee_476d_8869_8306cb9ca8b0.slice - libcontainer container kubepods-besteffort-pod6412d168_75ee_476d_8869_8306cb9ca8b0.slice. Oct 29 23:31:48.101915 systemd[1]: Created slice kubepods-besteffort-pod78e681d5_73bb_40ef_a840_c0314e73e7fe.slice - libcontainer container kubepods-besteffort-pod78e681d5_73bb_40ef_a840_c0314e73e7fe.slice. Oct 29 23:31:48.107021 systemd[1]: Created slice kubepods-besteffort-podf800e220_9463_414f_af1d_c837bb08cf61.slice - libcontainer container kubepods-besteffort-podf800e220_9463_414f_af1d_c837bb08cf61.slice. 
Oct 29 23:31:48.161560 kubelet[2753]: I1029 23:31:48.161514 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwwk\" (UniqueName: \"kubernetes.io/projected/17b6b460-22bb-4f11-b0a1-b884ed538e3c-kube-api-access-7xwwk\") pod \"calico-kube-controllers-65fc69cc4d-mslrm\" (UID: \"17b6b460-22bb-4f11-b0a1-b884ed538e3c\") " pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" Oct 29 23:31:48.161852 kubelet[2753]: I1029 23:31:48.161589 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d94c374e-545a-4366-ade5-85d4ae3cccd1-goldmane-key-pair\") pod \"goldmane-7c778bb748-7tj9t\" (UID: \"d94c374e-545a-4366-ade5-85d4ae3cccd1\") " pod="calico-system/goldmane-7c778bb748-7tj9t" Oct 29 23:31:48.161852 kubelet[2753]: I1029 23:31:48.161654 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78e681d5-73bb-40ef-a840-c0314e73e7fe-calico-apiserver-certs\") pod \"calico-apiserver-79bdd98655-v9k4n\" (UID: \"78e681d5-73bb-40ef-a840-c0314e73e7fe\") " pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" Oct 29 23:31:48.161852 kubelet[2753]: I1029 23:31:48.161672 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2plm\" (UniqueName: \"kubernetes.io/projected/f9e57f2b-e212-4e6d-8835-0563a00cdf57-kube-api-access-l2plm\") pod \"coredns-66bc5c9577-nqpkr\" (UID: \"f9e57f2b-e212-4e6d-8835-0563a00cdf57\") " pod="kube-system/coredns-66bc5c9577-nqpkr" Oct 29 23:31:48.161852 kubelet[2753]: I1029 23:31:48.161694 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f800e220-9463-414f-af1d-c837bb08cf61-whisker-backend-key-pair\") pod \"whisker-7c49f6c494-pmgkn\" (UID: \"f800e220-9463-414f-af1d-c837bb08cf61\") " pod="calico-system/whisker-7c49f6c494-pmgkn" Oct 29 23:31:48.161852 kubelet[2753]: I1029 23:31:48.161712 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f800e220-9463-414f-af1d-c837bb08cf61-whisker-ca-bundle\") pod \"whisker-7c49f6c494-pmgkn\" (UID: \"f800e220-9463-414f-af1d-c837bb08cf61\") " pod="calico-system/whisker-7c49f6c494-pmgkn" Oct 29 23:31:48.162004 kubelet[2753]: I1029 23:31:48.161728 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d94c374e-545a-4366-ade5-85d4ae3cccd1-config\") pod \"goldmane-7c778bb748-7tj9t\" (UID: \"d94c374e-545a-4366-ade5-85d4ae3cccd1\") " pod="calico-system/goldmane-7c778bb748-7tj9t" Oct 29 23:31:48.162004 kubelet[2753]: I1029 23:31:48.161777 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvpg9\" (UniqueName: \"kubernetes.io/projected/d94c374e-545a-4366-ade5-85d4ae3cccd1-kube-api-access-gvpg9\") pod \"goldmane-7c778bb748-7tj9t\" (UID: \"d94c374e-545a-4366-ade5-85d4ae3cccd1\") " pod="calico-system/goldmane-7c778bb748-7tj9t" Oct 29 23:31:48.162004 kubelet[2753]: I1029 23:31:48.161854 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwgb\" (UniqueName: 
\"kubernetes.io/projected/78e681d5-73bb-40ef-a840-c0314e73e7fe-kube-api-access-9qwgb\") pod \"calico-apiserver-79bdd98655-v9k4n\" (UID: \"78e681d5-73bb-40ef-a840-c0314e73e7fe\") " pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" Oct 29 23:31:48.162004 kubelet[2753]: I1029 23:31:48.161890 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/145c01ef-4eaa-44de-b6fe-4efc0106e77d-calico-apiserver-certs\") pod \"calico-apiserver-7c69cfcf75-tbp7d\" (UID: \"145c01ef-4eaa-44de-b6fe-4efc0106e77d\") " pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" Oct 29 23:31:48.162004 kubelet[2753]: I1029 23:31:48.161909 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662cc\" (UniqueName: \"kubernetes.io/projected/7b87c64e-f652-434c-854d-97f667ea0aaa-kube-api-access-662cc\") pod \"coredns-66bc5c9577-ds4w8\" (UID: \"7b87c64e-f652-434c-854d-97f667ea0aaa\") " pod="kube-system/coredns-66bc5c9577-ds4w8" Oct 29 23:31:48.162414 kubelet[2753]: I1029 23:31:48.161945 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9e57f2b-e212-4e6d-8835-0563a00cdf57-config-volume\") pod \"coredns-66bc5c9577-nqpkr\" (UID: \"f9e57f2b-e212-4e6d-8835-0563a00cdf57\") " pod="kube-system/coredns-66bc5c9577-nqpkr" Oct 29 23:31:48.162414 kubelet[2753]: I1029 23:31:48.161963 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17b6b460-22bb-4f11-b0a1-b884ed538e3c-tigera-ca-bundle\") pod \"calico-kube-controllers-65fc69cc4d-mslrm\" (UID: \"17b6b460-22bb-4f11-b0a1-b884ed538e3c\") " pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" Oct 29 23:31:48.162414 kubelet[2753]: I1029 23:31:48.162023 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d94c374e-545a-4366-ade5-85d4ae3cccd1-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-7tj9t\" (UID: \"d94c374e-545a-4366-ade5-85d4ae3cccd1\") " pod="calico-system/goldmane-7c778bb748-7tj9t" Oct 29 23:31:48.162414 kubelet[2753]: I1029 23:31:48.162041 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzh9v\" (UniqueName: \"kubernetes.io/projected/145c01ef-4eaa-44de-b6fe-4efc0106e77d-kube-api-access-gzh9v\") pod \"calico-apiserver-7c69cfcf75-tbp7d\" (UID: \"145c01ef-4eaa-44de-b6fe-4efc0106e77d\") " pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" Oct 29 23:31:48.162414 kubelet[2753]: I1029 23:31:48.162056 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b87c64e-f652-434c-854d-97f667ea0aaa-config-volume\") pod \"coredns-66bc5c9577-ds4w8\" (UID: \"7b87c64e-f652-434c-854d-97f667ea0aaa\") " pod="kube-system/coredns-66bc5c9577-ds4w8" Oct 29 23:31:48.162525 kubelet[2753]: I1029 23:31:48.162231 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6412d168-75ee-476d-8869-8306cb9ca8b0-calico-apiserver-certs\") pod \"calico-apiserver-79bdd98655-7jtgs\" (UID: \"6412d168-75ee-476d-8869-8306cb9ca8b0\") " 
pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" Oct 29 23:31:48.162525 kubelet[2753]: I1029 23:31:48.162261 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcgcc\" (UniqueName: \"kubernetes.io/projected/f800e220-9463-414f-af1d-c837bb08cf61-kube-api-access-mcgcc\") pod \"whisker-7c49f6c494-pmgkn\" (UID: \"f800e220-9463-414f-af1d-c837bb08cf61\") " pod="calico-system/whisker-7c49f6c494-pmgkn" Oct 29 23:31:48.162525 kubelet[2753]: I1029 23:31:48.162277 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zk8\" (UniqueName: \"kubernetes.io/projected/6412d168-75ee-476d-8869-8306cb9ca8b0-kube-api-access-98zk8\") pod \"calico-apiserver-79bdd98655-7jtgs\" (UID: \"6412d168-75ee-476d-8869-8306cb9ca8b0\") " pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" Oct 29 23:31:48.360793 kubelet[2753]: E1029 23:31:48.360665 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:48.361959 containerd[1597]: time="2025-10-29T23:31:48.361867812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nqpkr,Uid:f9e57f2b-e212-4e6d-8835-0563a00cdf57,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:48.368667 containerd[1597]: time="2025-10-29T23:31:48.368629691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65fc69cc4d-mslrm,Uid:17b6b460-22bb-4f11-b0a1-b884ed538e3c,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:48.375092 containerd[1597]: time="2025-10-29T23:31:48.375056799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-7tj9t,Uid:d94c374e-545a-4366-ade5-85d4ae3cccd1,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:48.388823 kubelet[2753]: E1029 23:31:48.388483 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:48.389168 containerd[1597]: time="2025-10-29T23:31:48.389081535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ds4w8,Uid:7b87c64e-f652-434c-854d-97f667ea0aaa,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:48.395284 containerd[1597]: time="2025-10-29T23:31:48.395241554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c69cfcf75-tbp7d,Uid:145c01ef-4eaa-44de-b6fe-4efc0106e77d,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:31:48.400421 containerd[1597]: time="2025-10-29T23:31:48.400330934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-7jtgs,Uid:6412d168-75ee-476d-8869-8306cb9ca8b0,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:31:48.406593 containerd[1597]: time="2025-10-29T23:31:48.406553354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-v9k4n,Uid:78e681d5-73bb-40ef-a840-c0314e73e7fe,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:31:48.411804 containerd[1597]: time="2025-10-29T23:31:48.411573492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c49f6c494-pmgkn,Uid:f800e220-9463-414f-af1d-c837bb08cf61,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:48.491766 containerd[1597]: time="2025-10-29T23:31:48.491077668Z" level=error msg="Failed to destroy network for sandbox 
\"8b663177bbce1dbd0516a931c90c261bef4049b202e47d1ffd91579ff4bc9073\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.492940 containerd[1597]: time="2025-10-29T23:31:48.492885772Z" level=error msg="Failed to destroy network for sandbox \"f5920ec4bf507e7feaa26b2894ef862dd0f4b065666f81a39d72bf4d3a1be179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.493557 containerd[1597]: time="2025-10-29T23:31:48.493507634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c49f6c494-pmgkn,Uid:f800e220-9463-414f-af1d-c837bb08cf61,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b663177bbce1dbd0516a931c90c261bef4049b202e47d1ffd91579ff4bc9073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.494135 kubelet[2753]: E1029 23:31:48.494050 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b663177bbce1dbd0516a931c90c261bef4049b202e47d1ffd91579ff4bc9073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.494211 kubelet[2753]: E1029 23:31:48.494163 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b663177bbce1dbd0516a931c90c261bef4049b202e47d1ffd91579ff4bc9073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c49f6c494-pmgkn" Oct 29 23:31:48.494211 kubelet[2753]: E1029 23:31:48.494183 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b663177bbce1dbd0516a931c90c261bef4049b202e47d1ffd91579ff4bc9073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c49f6c494-pmgkn" Oct 29 23:31:48.494281 kubelet[2753]: E1029 23:31:48.494246 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c49f6c494-pmgkn_calico-system(f800e220-9463-414f-af1d-c837bb08cf61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c49f6c494-pmgkn_calico-system(f800e220-9463-414f-af1d-c837bb08cf61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b663177bbce1dbd0516a931c90c261bef4049b202e47d1ffd91579ff4bc9073\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c49f6c494-pmgkn" podUID="f800e220-9463-414f-af1d-c837bb08cf61" Oct 29 23:31:48.496070 containerd[1597]: time="2025-10-29T23:31:48.494802800Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ds4w8,Uid:7b87c64e-f652-434c-854d-97f667ea0aaa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5920ec4bf507e7feaa26b2894ef862dd0f4b065666f81a39d72bf4d3a1be179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.496168 kubelet[2753]: E1029 23:31:48.496096 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5920ec4bf507e7feaa26b2894ef862dd0f4b065666f81a39d72bf4d3a1be179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.496203 kubelet[2753]: E1029 23:31:48.496153 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5920ec4bf507e7feaa26b2894ef862dd0f4b065666f81a39d72bf4d3a1be179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ds4w8" Oct 29 23:31:48.496203 kubelet[2753]: E1029 23:31:48.496193 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5920ec4bf507e7feaa26b2894ef862dd0f4b065666f81a39d72bf4d3a1be179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ds4w8" Oct 29 23:31:48.496276 kubelet[2753]: E1029 23:31:48.496244 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ds4w8_kube-system(7b87c64e-f652-434c-854d-97f667ea0aaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ds4w8_kube-system(7b87c64e-f652-434c-854d-97f667ea0aaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5920ec4bf507e7feaa26b2894ef862dd0f4b065666f81a39d72bf4d3a1be179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ds4w8" podUID="7b87c64e-f652-434c-854d-97f667ea0aaa" Oct 29 23:31:48.498059 containerd[1597]: time="2025-10-29T23:31:48.498020674Z" level=error msg="Failed to destroy network for sandbox \"2bfb5c88f06ea6e2da3bb38e6078f16ecfab2390058e346bd83921c9bcf8a853\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.501215 containerd[1597]: time="2025-10-29T23:31:48.501155665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65fc69cc4d-mslrm,Uid:17b6b460-22bb-4f11-b0a1-b884ed538e3c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfb5c88f06ea6e2da3bb38e6078f16ecfab2390058e346bd83921c9bcf8a853\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.501573 kubelet[2753]: E1029 23:31:48.501543 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfb5c88f06ea6e2da3bb38e6078f16ecfab2390058e346bd83921c9bcf8a853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.501759 kubelet[2753]: E1029 23:31:48.501733 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfb5c88f06ea6e2da3bb38e6078f16ecfab2390058e346bd83921c9bcf8a853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" Oct 29 23:31:48.502134 kubelet[2753]: E1029 23:31:48.501828 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfb5c88f06ea6e2da3bb38e6078f16ecfab2390058e346bd83921c9bcf8a853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" Oct 29 23:31:48.502134 kubelet[2753]: E1029 23:31:48.501881 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65fc69cc4d-mslrm_calico-system(17b6b460-22bb-4f11-b0a1-b884ed538e3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65fc69cc4d-mslrm_calico-system(17b6b460-22bb-4f11-b0a1-b884ed538e3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bfb5c88f06ea6e2da3bb38e6078f16ecfab2390058e346bd83921c9bcf8a853\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" podUID="17b6b460-22bb-4f11-b0a1-b884ed538e3c" Oct 29 23:31:48.512236 containerd[1597]: time="2025-10-29T23:31:48.512171415Z" level=error msg="Failed to destroy network for sandbox \"77d2d0ee5f1bf119bada5c1884378004247f8e9391364be0d56f016b7d96b4b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.516403 containerd[1597]: time="2025-10-29T23:31:48.516341603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c69cfcf75-tbp7d,Uid:145c01ef-4eaa-44de-b6fe-4efc0106e77d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2d0ee5f1bf119bada5c1884378004247f8e9391364be0d56f016b7d96b4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.516662 kubelet[2753]: E1029 23:31:48.516579 2753 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2d0ee5f1bf119bada5c1884378004247f8e9391364be0d56f016b7d96b4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.516662 kubelet[2753]: E1029 23:31:48.516630 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2d0ee5f1bf119bada5c1884378004247f8e9391364be0d56f016b7d96b4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" Oct 29 23:31:48.516835 kubelet[2753]: E1029 23:31:48.516648 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2d0ee5f1bf119bada5c1884378004247f8e9391364be0d56f016b7d96b4b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" Oct 29 23:31:48.516835 kubelet[2753]: E1029 23:31:48.516734 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c69cfcf75-tbp7d_calico-apiserver(145c01ef-4eaa-44de-b6fe-4efc0106e77d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c69cfcf75-tbp7d_calico-apiserver(145c01ef-4eaa-44de-b6fe-4efc0106e77d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77d2d0ee5f1bf119bada5c1884378004247f8e9391364be0d56f016b7d96b4b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" podUID="145c01ef-4eaa-44de-b6fe-4efc0106e77d" Oct 29 23:31:48.522997 containerd[1597]: time="2025-10-29T23:31:48.522640346Z" level=error msg="Failed to destroy network for sandbox \"05f76fc61adf2f49d0c30e3a6028df201bb863d0a73acc4bc9fa1ed6ca31c27d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.526832 containerd[1597]: time="2025-10-29T23:31:48.526781653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-7jtgs,Uid:6412d168-75ee-476d-8869-8306cb9ca8b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f76fc61adf2f49d0c30e3a6028df201bb863d0a73acc4bc9fa1ed6ca31c27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.527193 kubelet[2753]: E1029 23:31:48.527154 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f76fc61adf2f49d0c30e3a6028df201bb863d0a73acc4bc9fa1ed6ca31c27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 29 23:31:48.527275 kubelet[2753]: E1029 23:31:48.527208 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f76fc61adf2f49d0c30e3a6028df201bb863d0a73acc4bc9fa1ed6ca31c27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" Oct 29 23:31:48.527342 kubelet[2753]: E1029 23:31:48.527318 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f76fc61adf2f49d0c30e3a6028df201bb863d0a73acc4bc9fa1ed6ca31c27d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" Oct 29 23:31:48.527439 kubelet[2753]: E1029 23:31:48.527399 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79bdd98655-7jtgs_calico-apiserver(6412d168-75ee-476d-8869-8306cb9ca8b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79bdd98655-7jtgs_calico-apiserver(6412d168-75ee-476d-8869-8306cb9ca8b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05f76fc61adf2f49d0c30e3a6028df201bb863d0a73acc4bc9fa1ed6ca31c27d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" podUID="6412d168-75ee-476d-8869-8306cb9ca8b0" Oct 29 23:31:48.528874 containerd[1597]: time="2025-10-29T23:31:48.528709881Z" level=error msg="Failed to destroy network for sandbox \"4b8e104eab001e6b0aebaf4561b624c79272e887d317ebca02d763091707fc72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.529773 containerd[1597]: time="2025-10-29T23:31:48.529738278Z" level=error msg="Failed to destroy network for sandbox \"ac8e903cde4fe2b1f1a997d65aa9ab1971213e8b309d1d059e24c52c5387c8a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.531112 containerd[1597]: time="2025-10-29T23:31:48.530292257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-7tj9t,Uid:d94c374e-545a-4366-ade5-85d4ae3cccd1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e104eab001e6b0aebaf4561b624c79272e887d317ebca02d763091707fc72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.531327 kubelet[2753]: E1029 23:31:48.531284 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e104eab001e6b0aebaf4561b624c79272e887d317ebca02d763091707fc72\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.531373 kubelet[2753]: E1029 23:31:48.531332 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e104eab001e6b0aebaf4561b624c79272e887d317ebca02d763091707fc72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-7tj9t" Oct 29 23:31:48.531373 kubelet[2753]: E1029 23:31:48.531353 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b8e104eab001e6b0aebaf4561b624c79272e887d317ebca02d763091707fc72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-7tj9t" Oct 29 23:31:48.531456 kubelet[2753]: E1029 23:31:48.531411 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-7tj9t_calico-system(d94c374e-545a-4366-ade5-85d4ae3cccd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-7tj9t_calico-system(d94c374e-545a-4366-ade5-85d4ae3cccd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b8e104eab001e6b0aebaf4561b624c79272e887d317ebca02d763091707fc72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-7tj9t" podUID="d94c374e-545a-4366-ade5-85d4ae3cccd1" Oct 29 23:31:48.532220 containerd[1597]: time="2025-10-29T23:31:48.532180604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-v9k4n,Uid:78e681d5-73bb-40ef-a840-c0314e73e7fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e903cde4fe2b1f1a997d65aa9ab1971213e8b309d1d059e24c52c5387c8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.532404 kubelet[2753]: E1029 23:31:48.532374 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e903cde4fe2b1f1a997d65aa9ab1971213e8b309d1d059e24c52c5387c8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.532475 kubelet[2753]: E1029 23:31:48.532415 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e903cde4fe2b1f1a997d65aa9ab1971213e8b309d1d059e24c52c5387c8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" Oct 29 23:31:48.532475 kubelet[2753]: E1029 23:31:48.532465 2753 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac8e903cde4fe2b1f1a997d65aa9ab1971213e8b309d1d059e24c52c5387c8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" Oct 29 23:31:48.534070 kubelet[2753]: E1029 23:31:48.532532 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79bdd98655-v9k4n_calico-apiserver(78e681d5-73bb-40ef-a840-c0314e73e7fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79bdd98655-v9k4n_calico-apiserver(78e681d5-73bb-40ef-a840-c0314e73e7fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac8e903cde4fe2b1f1a997d65aa9ab1971213e8b309d1d059e24c52c5387c8a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" podUID="78e681d5-73bb-40ef-a840-c0314e73e7fe" Oct 29 23:31:48.536207 containerd[1597]: time="2025-10-29T23:31:48.536136424Z" level=error msg="Failed to destroy network for sandbox \"59fefda955110f58de937cdb0da9d65ad452a4f0f57cfda9951b96c4105f6e43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.537992 containerd[1597]: time="2025-10-29T23:31:48.537119579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nqpkr,Uid:f9e57f2b-e212-4e6d-8835-0563a00cdf57,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59fefda955110f58de937cdb0da9d65ad452a4f0f57cfda9951b96c4105f6e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.539409 kubelet[2753]: E1029 23:31:48.539087 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59fefda955110f58de937cdb0da9d65ad452a4f0f57cfda9951b96c4105f6e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.539409 kubelet[2753]: E1029 23:31:48.539183 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59fefda955110f58de937cdb0da9d65ad452a4f0f57cfda9951b96c4105f6e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nqpkr" Oct 29 23:31:48.539409 kubelet[2753]: E1029 23:31:48.539218 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59fefda955110f58de937cdb0da9d65ad452a4f0f57cfda9951b96c4105f6e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nqpkr" Oct 29 23:31:48.539540 kubelet[2753]: E1029 23:31:48.539273 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nqpkr_kube-system(f9e57f2b-e212-4e6d-8835-0563a00cdf57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nqpkr_kube-system(f9e57f2b-e212-4e6d-8835-0563a00cdf57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59fefda955110f58de937cdb0da9d65ad452a4f0f57cfda9951b96c4105f6e43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nqpkr" podUID="f9e57f2b-e212-4e6d-8835-0563a00cdf57" Oct 29 23:31:48.563518 systemd[1]: Created slice kubepods-besteffort-pod10144807_b663_4cb3_833e_3110eaa2d568.slice - libcontainer container kubepods-besteffort-pod10144807_b663_4cb3_833e_3110eaa2d568.slice. Oct 29 23:31:48.566922 containerd[1597]: time="2025-10-29T23:31:48.566869953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2hkw,Uid:10144807-b663-4cb3-833e-3110eaa2d568,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:48.610170 containerd[1597]: time="2025-10-29T23:31:48.610118245Z" level=error msg="Failed to destroy network for sandbox \"a58e069c937001a120c6380a6dccee1be82f74c0fb9abcfa4fd2917721570991\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.611540 containerd[1597]: time="2025-10-29T23:31:48.611376849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2hkw,Uid:10144807-b663-4cb3-833e-3110eaa2d568,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58e069c937001a120c6380a6dccee1be82f74c0fb9abcfa4fd2917721570991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.612622 kubelet[2753]: E1029 23:31:48.612216 2753 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58e069c937001a120c6380a6dccee1be82f74c0fb9abcfa4fd2917721570991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:31:48.612718 kubelet[2753]: E1029 23:31:48.612652 2753 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58e069c937001a120c6380a6dccee1be82f74c0fb9abcfa4fd2917721570991\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:48.612718 kubelet[2753]: E1029 23:31:48.612676 2753 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58e069c937001a120c6380a6dccee1be82f74c0fb9abcfa4fd2917721570991\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h2hkw" Oct 29 23:31:48.612903 kubelet[2753]: E1029 23:31:48.612864 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h2hkw_calico-system(10144807-b663-4cb3-833e-3110eaa2d568)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h2hkw_calico-system(10144807-b663-4cb3-833e-3110eaa2d568)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a58e069c937001a120c6380a6dccee1be82f74c0fb9abcfa4fd2917721570991\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:31:48.668868 kubelet[2753]: E1029 23:31:48.668835 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:48.670056 containerd[1597]: time="2025-10-29T23:31:48.670004046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 23:31:51.596815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365786228.mount: Deactivated successfully. Oct 29 23:31:52.170825 containerd[1597]: time="2025-10-29T23:31:52.170760261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:52.171640 containerd[1597]: time="2025-10-29T23:31:52.171387960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 29 23:31:52.172490 containerd[1597]: time="2025-10-29T23:31:52.172417192Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:52.186888 containerd[1597]: time="2025-10-29T23:31:52.186837279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:52.203854 containerd[1597]: time="2025-10-29T23:31:52.203704641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.533642313s" Oct 29 23:31:52.203854 containerd[1597]: time="2025-10-29T23:31:52.203747363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 29 23:31:52.220366 containerd[1597]: time="2025-10-29T23:31:52.220325196Z" level=info msg="CreateContainer within sandbox \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 23:31:52.229773 containerd[1597]: time="2025-10-29T23:31:52.229729967Z" level=info msg="Container 
eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:52.230926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355536870.mount: Deactivated successfully. Oct 29 23:31:52.238691 containerd[1597]: time="2025-10-29T23:31:52.238629483Z" level=info msg="CreateContainer within sandbox \"4cb0276c69580931368ced25d0511918000d6b82da1e5adea3c49d0a337786f1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b\"" Oct 29 23:31:52.239493 containerd[1597]: time="2025-10-29T23:31:52.239451188Z" level=info msg="StartContainer for \"eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b\"" Oct 29 23:31:52.241704 containerd[1597]: time="2025-10-29T23:31:52.241053558Z" level=info msg="connecting to shim eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b" address="unix:///run/containerd/s/dc3208e3b7d49df5de181c862a90faa121165ef8421a820e977b7b99328eac1d" protocol=ttrpc version=3 Oct 29 23:31:52.262191 systemd[1]: Started cri-containerd-eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b.scope - libcontainer container eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b. Oct 29 23:31:52.340326 containerd[1597]: time="2025-10-29T23:31:52.340261391Z" level=info msg="StartContainer for \"eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b\" returns successfully" Oct 29 23:31:52.426409 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 23:31:52.426536 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 29 23:31:52.688022 kubelet[2753]: E1029 23:31:52.687955 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:52.704223 kubelet[2753]: I1029 23:31:52.704187 2753 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f800e220-9463-414f-af1d-c837bb08cf61-whisker-backend-key-pair\") pod \"f800e220-9463-414f-af1d-c837bb08cf61\" (UID: \"f800e220-9463-414f-af1d-c837bb08cf61\") " Oct 29 23:31:52.704359 kubelet[2753]: I1029 23:31:52.704241 2753 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcgcc\" (UniqueName: \"kubernetes.io/projected/f800e220-9463-414f-af1d-c837bb08cf61-kube-api-access-mcgcc\") pod \"f800e220-9463-414f-af1d-c837bb08cf61\" (UID: \"f800e220-9463-414f-af1d-c837bb08cf61\") " Oct 29 23:31:52.704359 kubelet[2753]: I1029 23:31:52.704265 2753 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f800e220-9463-414f-af1d-c837bb08cf61-whisker-ca-bundle\") pod \"f800e220-9463-414f-af1d-c837bb08cf61\" (UID: \"f800e220-9463-414f-af1d-c837bb08cf61\") " Oct 29 23:31:52.717747 systemd[1]: var-lib-kubelet-pods-f800e220\x2d9463\x2d414f\x2daf1d\x2dc837bb08cf61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcgcc.mount: Deactivated successfully. Oct 29 23:31:52.717866 systemd[1]: var-lib-kubelet-pods-f800e220\x2d9463\x2d414f\x2daf1d\x2dc837bb08cf61-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 29 23:31:52.720496 kubelet[2753]: I1029 23:31:52.720450 2753 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f800e220-9463-414f-af1d-c837bb08cf61-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f800e220-9463-414f-af1d-c837bb08cf61" (UID: "f800e220-9463-414f-af1d-c837bb08cf61"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 23:31:52.721386 kubelet[2753]: I1029 23:31:52.721336 2753 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f800e220-9463-414f-af1d-c837bb08cf61-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f800e220-9463-414f-af1d-c837bb08cf61" (UID: "f800e220-9463-414f-af1d-c837bb08cf61"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 23:31:52.721553 kubelet[2753]: I1029 23:31:52.721534 2753 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f800e220-9463-414f-af1d-c837bb08cf61-kube-api-access-mcgcc" (OuterVolumeSpecName: "kube-api-access-mcgcc") pod "f800e220-9463-414f-af1d-c837bb08cf61" (UID: "f800e220-9463-414f-af1d-c837bb08cf61"). InnerVolumeSpecName "kube-api-access-mcgcc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 23:31:52.804747 kubelet[2753]: I1029 23:31:52.804712 2753 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f800e220-9463-414f-af1d-c837bb08cf61-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 23:31:52.805078 kubelet[2753]: I1029 23:31:52.805043 2753 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcgcc\" (UniqueName: \"kubernetes.io/projected/f800e220-9463-414f-af1d-c837bb08cf61-kube-api-access-mcgcc\") on node \"localhost\" DevicePath \"\"" Oct 29 23:31:52.805078 kubelet[2753]: I1029 23:31:52.805060 2753 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f800e220-9463-414f-af1d-c837bb08cf61-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 23:31:52.993722 systemd[1]: Removed slice kubepods-besteffort-podf800e220_9463_414f_af1d_c837bb08cf61.slice - libcontainer container kubepods-besteffort-podf800e220_9463_414f_af1d_c837bb08cf61.slice. Oct 29 23:31:53.022609 kubelet[2753]: I1029 23:31:53.022535 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2rkwv" podStartSLOduration=1.903533839 podStartE2EDuration="13.01576126s" podCreationTimestamp="2025-10-29 23:31:40 +0000 UTC" firstStartedPulling="2025-10-29 23:31:41.092410249 +0000 UTC m=+23.629987515" lastFinishedPulling="2025-10-29 23:31:52.20463767 +0000 UTC m=+34.742214936" observedRunningTime="2025-10-29 23:31:52.708309831 +0000 UTC m=+35.245887137" watchObservedRunningTime="2025-10-29 23:31:53.01576126 +0000 UTC m=+35.553338526" Oct 29 23:31:53.071794 systemd[1]: Created slice kubepods-besteffort-pod5e141988_8039_43c5_b128_4c31fe9414d8.slice - libcontainer container kubepods-besteffort-pod5e141988_8039_43c5_b128_4c31fe9414d8.slice. 
Oct 29 23:31:53.107213 kubelet[2753]: I1029 23:31:53.107158 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5e141988-8039-43c5-b128-4c31fe9414d8-whisker-backend-key-pair\") pod \"whisker-85b589694d-qs7sp\" (UID: \"5e141988-8039-43c5-b128-4c31fe9414d8\") " pod="calico-system/whisker-85b589694d-qs7sp" Oct 29 23:31:53.107213 kubelet[2753]: I1029 23:31:53.107214 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e141988-8039-43c5-b128-4c31fe9414d8-whisker-ca-bundle\") pod \"whisker-85b589694d-qs7sp\" (UID: \"5e141988-8039-43c5-b128-4c31fe9414d8\") " pod="calico-system/whisker-85b589694d-qs7sp" Oct 29 23:31:53.107415 kubelet[2753]: I1029 23:31:53.107295 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vdh\" (UniqueName: \"kubernetes.io/projected/5e141988-8039-43c5-b128-4c31fe9414d8-kube-api-access-x7vdh\") pod \"whisker-85b589694d-qs7sp\" (UID: \"5e141988-8039-43c5-b128-4c31fe9414d8\") " pod="calico-system/whisker-85b589694d-qs7sp" Oct 29 23:31:53.378909 containerd[1597]: time="2025-10-29T23:31:53.378575716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b589694d-qs7sp,Uid:5e141988-8039-43c5-b128-4c31fe9414d8,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:53.560953 kubelet[2753]: I1029 23:31:53.560864 2753 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f800e220-9463-414f-af1d-c837bb08cf61" path="/var/lib/kubelet/pods/f800e220-9463-414f-af1d-c837bb08cf61/volumes" Oct 29 23:31:53.572930 systemd-networkd[1489]: cali7b10bad08ac: Link UP Oct 29 23:31:53.573669 systemd-networkd[1489]: cali7b10bad08ac: Gained carrier Oct 29 23:31:53.591024 containerd[1597]: 2025-10-29 23:31:53.399 [INFO][3941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:31:53.591024 containerd[1597]: 2025-10-29 23:31:53.443 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85b589694d--qs7sp-eth0 whisker-85b589694d- calico-system 5e141988-8039-43c5-b128-4c31fe9414d8 913 0 2025-10-29 23:31:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85b589694d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85b589694d-qs7sp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7b10bad08ac [] [] }} ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-" Oct 29 23:31:53.591024 containerd[1597]: 2025-10-29 23:31:53.444 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.591024 containerd[1597]: 2025-10-29 23:31:53.517 [INFO][3954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" HandleID="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" 
Workload="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.517 [INFO][3954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" HandleID="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Workload="localhost-k8s-whisker--85b589694d--qs7sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bc4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85b589694d-qs7sp", "timestamp":"2025-10-29 23:31:53.517270401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.517 [INFO][3954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.517 [INFO][3954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.517 [INFO][3954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.531 [INFO][3954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" host="localhost" Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.538 [INFO][3954] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.543 [INFO][3954] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.545 [INFO][3954] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.549 [INFO][3954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:31:53.591255 containerd[1597]: 2025-10-29 23:31:53.549 [INFO][3954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" host="localhost" Oct 29 23:31:53.591442 containerd[1597]: 2025-10-29 23:31:53.551 [INFO][3954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4 Oct 29 23:31:53.591442 containerd[1597]: 2025-10-29 23:31:53.556 [INFO][3954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" host="localhost" Oct 29 23:31:53.591442 containerd[1597]: 2025-10-29 23:31:53.562 [INFO][3954] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" host="localhost" Oct 29 23:31:53.591442 containerd[1597]: 2025-10-29 23:31:53.563 [INFO][3954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" host="localhost" Oct 29 23:31:53.591442 containerd[1597]: 2025-10-29 23:31:53.563 [INFO][3954] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Oct 29 23:31:53.591442 containerd[1597]: 2025-10-29 23:31:53.563 [INFO][3954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" HandleID="k8s-pod-network.c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Workload="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.591547 containerd[1597]: 2025-10-29 23:31:53.565 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b589694d--qs7sp-eth0", GenerateName:"whisker-85b589694d-", Namespace:"calico-system", SelfLink:"", UID:"5e141988-8039-43c5-b128-4c31fe9414d8", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b589694d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85b589694d-qs7sp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7b10bad08ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:31:53.591547 containerd[1597]: 2025-10-29 23:31:53.565 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.591669 containerd[1597]: 2025-10-29 23:31:53.565 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b10bad08ac ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.591669 containerd[1597]: 2025-10-29 23:31:53.574 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.591711 containerd[1597]: 2025-10-29 23:31:53.574 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b589694d--qs7sp-eth0", GenerateName:"whisker-85b589694d-", Namespace:"calico-system", SelfLink:"", UID:"5e141988-8039-43c5-b128-4c31fe9414d8", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b589694d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4", Pod:"whisker-85b589694d-qs7sp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7b10bad08ac", MAC:"86:dc:81:c4:11:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:31:53.591766 containerd[1597]: 2025-10-29 23:31:53.588 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" Namespace="calico-system" Pod="whisker-85b589694d-qs7sp" WorkloadEndpoint="localhost-k8s-whisker--85b589694d--qs7sp-eth0" Oct 29 23:31:53.635537 containerd[1597]: time="2025-10-29T23:31:53.635420309Z" level=info msg="connecting to shim c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4" address="unix:///run/containerd/s/3c84061a33e59d17c63c9e87afa35a9dea482b223f884152f1a7de49afe5454b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:53.689812 kubelet[2753]: I1029 23:31:53.689760 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:31:53.690249 kubelet[2753]: E1029 23:31:53.690226 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:31:53.709180 systemd[1]: Started cri-containerd-c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4.scope - libcontainer container c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4. 
Oct 29 23:31:53.721095 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:31:53.740411 containerd[1597]: time="2025-10-29T23:31:53.740368540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b589694d-qs7sp,Uid:5e141988-8039-43c5-b128-4c31fe9414d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7f6306d8660a262820824702c1e0a119be9a7d492de48b8a02c858594a2fab4\"" Oct 29 23:31:53.742346 containerd[1597]: time="2025-10-29T23:31:53.742300438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:31:53.972523 containerd[1597]: time="2025-10-29T23:31:53.972233023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:31:53.990567 containerd[1597]: time="2025-10-29T23:31:53.989552103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:31:53.990567 containerd[1597]: time="2025-10-29T23:31:53.989564864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:31:53.990706 kubelet[2753]: E1029 23:31:53.989834 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:31:53.992669 kubelet[2753]: E1029 23:31:53.992617 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:31:53.995434 kubelet[2753]: E1029 23:31:53.995377 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85b589694d-qs7sp_calico-system(5e141988-8039-43c5-b128-4c31fe9414d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:31:53.996423 containerd[1597]: time="2025-10-29T23:31:53.996392269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:31:54.209848 containerd[1597]: time="2025-10-29T23:31:54.209786453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:31:54.210809 containerd[1597]: time="2025-10-29T23:31:54.210770281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:31:54.210881 containerd[1597]: time="2025-10-29T23:31:54.210856444Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:31:54.211102 kubelet[2753]: E1029 23:31:54.211065 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:31:54.211158 kubelet[2753]: E1029 23:31:54.211114 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:31:54.211224 kubelet[2753]: E1029 23:31:54.211201 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85b589694d-qs7sp_calico-system(5e141988-8039-43c5-b128-4c31fe9414d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:31:54.211456 kubelet[2753]: E1029 23:31:54.211247 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85b589694d-qs7sp" podUID="5e141988-8039-43c5-b128-4c31fe9414d8" Oct 29 23:31:54.695092 kubelet[2753]: E1029 23:31:54.694484 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85b589694d-qs7sp" podUID="5e141988-8039-43c5-b128-4c31fe9414d8" Oct 29 23:31:55.042304 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:44136.service - OpenSSH per-connection server daemon (10.0.0.1:44136). 
Oct 29 23:31:55.120495 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 44136 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:31:55.122414 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:55.127780 systemd-logind[1573]: New session 8 of user core. Oct 29 23:31:55.133119 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 23:31:55.278687 sshd[4151]: Connection closed by 10.0.0.1 port 44136 Oct 29 23:31:55.278486 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:55.282804 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Oct 29 23:31:55.283109 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:44136.service: Deactivated successfully. Oct 29 23:31:55.285055 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 23:31:55.286666 systemd-logind[1573]: Removed session 8. Oct 29 23:31:55.349159 systemd-networkd[1489]: cali7b10bad08ac: Gained IPv6LL Oct 29 23:31:55.698789 kubelet[2753]: E1029 23:31:55.698731 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85b589694d-qs7sp" podUID="5e141988-8039-43c5-b128-4c31fe9414d8" Oct 29 23:31:59.561024 containerd[1597]: time="2025-10-29T23:31:59.560729811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-v9k4n,Uid:78e681d5-73bb-40ef-a840-c0314e73e7fe,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:31:59.674128 systemd-networkd[1489]: cali82f93917678: Link UP Oct 29 23:31:59.674394 systemd-networkd[1489]: cali82f93917678: Gained carrier Oct 29 23:31:59.684969 containerd[1597]: 2025-10-29 23:31:59.586 [INFO][4260] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:31:59.684969 containerd[1597]: 2025-10-29 23:31:59.602 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0 calico-apiserver-79bdd98655- calico-apiserver 78e681d5-73bb-40ef-a840-c0314e73e7fe 848 0 2025-10-29 23:31:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79bdd98655 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79bdd98655-v9k4n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82f93917678 [] [] }} ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-" Oct 29 23:31:59.684969 containerd[1597]: 2025-10-29 23:31:59.602 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.684969 containerd[1597]: 2025-10-29 23:31:59.631 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" HandleID="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Workload="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.631 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" HandleID="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Workload="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c38c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79bdd98655-v9k4n", "timestamp":"2025-10-29 23:31:59.631114125 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.631 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.631 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.631 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.642 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" host="localhost" Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.648 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.653 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.655 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.657 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:31:59.685178 containerd[1597]: 2025-10-29 23:31:59.657 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" host="localhost" Oct 29 23:31:59.685755 containerd[1597]: 2025-10-29 23:31:59.658 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc Oct 29 23:31:59.685755 containerd[1597]: 2025-10-29 23:31:59.663 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" host="localhost" Oct 29 23:31:59.685755 containerd[1597]: 2025-10-29 23:31:59.667 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" host="localhost" Oct 29 23:31:59.685755 containerd[1597]: 2025-10-29 23:31:59.668 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" host="localhost" Oct 29 23:31:59.685755 containerd[1597]: 2025-10-29 23:31:59.668 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:31:59.685755 containerd[1597]: 2025-10-29 23:31:59.668 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" HandleID="k8s-pod-network.44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Workload="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.685892 containerd[1597]: 2025-10-29 23:31:59.670 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0", GenerateName:"calico-apiserver-79bdd98655-", Namespace:"calico-apiserver", SelfLink:"", UID:"78e681d5-73bb-40ef-a840-c0314e73e7fe", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bdd98655", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79bdd98655-v9k4n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82f93917678", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:31:59.685946 containerd[1597]: 2025-10-29 23:31:59.670 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.685946 containerd[1597]: 2025-10-29 23:31:59.670 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82f93917678 ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.685946 containerd[1597]: 2025-10-29 23:31:59.674 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.686045 containerd[1597]: 2025-10-29 23:31:59.674 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0", GenerateName:"calico-apiserver-79bdd98655-", Namespace:"calico-apiserver", SelfLink:"", UID:"78e681d5-73bb-40ef-a840-c0314e73e7fe", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bdd98655", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc", Pod:"calico-apiserver-79bdd98655-v9k4n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82f93917678", MAC:"2a:43:4b:a2:42:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:31:59.686098 containerd[1597]: 2025-10-29 23:31:59.683 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-v9k4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--v9k4n-eth0" Oct 29 23:31:59.712302 containerd[1597]: time="2025-10-29T23:31:59.712180431Z" level=info msg="connecting to shim 44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc" address="unix:///run/containerd/s/08be702a4d40fe53894915b5cc2b1655b25af7dbf997b7236c12690da9978eb0" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:59.746161 systemd[1]: Started cri-containerd-44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc.scope - libcontainer container 44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc. 
Oct 29 23:31:59.756799 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:31:59.778282 containerd[1597]: time="2025-10-29T23:31:59.778246955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-v9k4n,Uid:78e681d5-73bb-40ef-a840-c0314e73e7fe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"44b03cc876b38944955bdd18141d351eeb497e1c192af7d7ae2af8741b961ecc\"" Oct 29 23:31:59.783866 containerd[1597]: time="2025-10-29T23:31:59.783839177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:31:59.976906 containerd[1597]: time="2025-10-29T23:31:59.976734774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:31:59.977931 containerd[1597]: time="2025-10-29T23:31:59.977865082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:31:59.978044 containerd[1597]: time="2025-10-29T23:31:59.977954805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:31:59.980134 kubelet[2753]: E1029 23:31:59.978101 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:31:59.980134 kubelet[2753]: E1029 23:31:59.978152 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:31:59.980134 kubelet[2753]: E1029 23:31:59.978227 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79bdd98655-v9k4n_calico-apiserver(78e681d5-73bb-40ef-a840-c0314e73e7fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:31:59.980134 kubelet[2753]: E1029 23:31:59.978256 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" podUID="78e681d5-73bb-40ef-a840-c0314e73e7fe" Oct 29 23:32:00.296492 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:47794.service - OpenSSH per-connection server daemon (10.0.0.1:47794). 
Oct 29 23:32:00.357894 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 47794 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:00.359231 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:00.364849 systemd-logind[1573]: New session 9 of user core. Oct 29 23:32:00.369133 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 23:32:00.515685 sshd[4354]: Connection closed by 10.0.0.1 port 47794 Oct 29 23:32:00.516187 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:00.520256 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:47794.service: Deactivated successfully. Oct 29 23:32:00.521931 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 23:32:00.522671 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Oct 29 23:32:00.523527 systemd-logind[1573]: Removed session 9. Oct 29 23:32:00.565744 containerd[1597]: time="2025-10-29T23:32:00.565648164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-7jtgs,Uid:6412d168-75ee-476d-8869-8306cb9ca8b0,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:00.701817 systemd-networkd[1489]: cali16d6cedb7b8: Link UP Oct 29 23:32:00.702080 systemd-networkd[1489]: cali16d6cedb7b8: Gained carrier Oct 29 23:32:00.711361 kubelet[2753]: E1029 23:32:00.711163 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" podUID="78e681d5-73bb-40ef-a840-c0314e73e7fe" Oct 29 23:32:00.735888 containerd[1597]: 2025-10-29 23:32:00.595 [INFO][4379] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:00.735888 containerd[1597]: 2025-10-29 23:32:00.614 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0 calico-apiserver-79bdd98655- calico-apiserver 6412d168-75ee-476d-8869-8306cb9ca8b0 850 0 2025-10-29 23:31:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79bdd98655 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79bdd98655-7jtgs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16d6cedb7b8 [] [] }} ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-" Oct 29 23:32:00.735888 containerd[1597]: 2025-10-29 23:32:00.614 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.735888 containerd[1597]: 2025-10-29 23:32:00.640 
[INFO][4394] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" HandleID="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Workload="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.641 [INFO][4394] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" HandleID="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Workload="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a2500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79bdd98655-7jtgs", "timestamp":"2025-10-29 23:32:00.640918637 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.641 [INFO][4394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.641 [INFO][4394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.641 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.650 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" host="localhost" Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.654 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.659 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.661 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.663 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:00.736137 containerd[1597]: 2025-10-29 23:32:00.663 [INFO][4394] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" host="localhost" Oct 29 23:32:00.736417 containerd[1597]: 2025-10-29 23:32:00.665 [INFO][4394] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8 Oct 29 23:32:00.736417 containerd[1597]: 2025-10-29 23:32:00.674 [INFO][4394] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" host="localhost" Oct 29 23:32:00.736417 containerd[1597]: 2025-10-29 23:32:00.695 [INFO][4394] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" host="localhost" Oct 29 23:32:00.736417 containerd[1597]: 2025-10-29 23:32:00.695 
[INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" host="localhost" Oct 29 23:32:00.736417 containerd[1597]: 2025-10-29 23:32:00.695 [INFO][4394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 23:32:00.736417 containerd[1597]: 2025-10-29 23:32:00.695 [INFO][4394] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" HandleID="k8s-pod-network.b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Workload="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.736561 containerd[1597]: 2025-10-29 23:32:00.697 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0", GenerateName:"calico-apiserver-79bdd98655-", Namespace:"calico-apiserver", SelfLink:"", UID:"6412d168-75ee-476d-8869-8306cb9ca8b0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bdd98655", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79bdd98655-7jtgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16d6cedb7b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:00.736632 containerd[1597]: 2025-10-29 23:32:00.698 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.736632 containerd[1597]: 2025-10-29 23:32:00.698 [INFO][4379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16d6cedb7b8 ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.736632 containerd[1597]: 2025-10-29 23:32:00.703 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" 
Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.736713 containerd[1597]: 2025-10-29 23:32:00.704 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0", GenerateName:"calico-apiserver-79bdd98655-", Namespace:"calico-apiserver", SelfLink:"", UID:"6412d168-75ee-476d-8869-8306cb9ca8b0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79bdd98655", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8", Pod:"calico-apiserver-79bdd98655-7jtgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16d6cedb7b8", MAC:"66:81:83:38:6e:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:00.736776 containerd[1597]: 2025-10-29 23:32:00.733 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" Namespace="calico-apiserver" Pod="calico-apiserver-79bdd98655-7jtgs" WorkloadEndpoint="localhost-k8s-calico--apiserver--79bdd98655--7jtgs-eth0" Oct 29 23:32:00.778394 containerd[1597]: time="2025-10-29T23:32:00.778353577Z" level=info msg="connecting to shim b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8" address="unix:///run/containerd/s/f00d823f11d12a5a2af00265b76dc415572c9ee6ba72e7afc92d78ce04d159fb" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:00.801177 systemd[1]: Started cri-containerd-b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8.scope - libcontainer container b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8. 
Oct 29 23:32:00.812407 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:00.831536 containerd[1597]: time="2025-10-29T23:32:00.831430898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79bdd98655-7jtgs,Uid:6412d168-75ee-476d-8869-8306cb9ca8b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b8f3fe81953c08bef9c96855674a3df686978bffeebf48b482e6a2075ce887b8\"" Oct 29 23:32:00.833610 containerd[1597]: time="2025-10-29T23:32:00.833574871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:01.031682 containerd[1597]: time="2025-10-29T23:32:01.031641183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:01.033486 containerd[1597]: time="2025-10-29T23:32:01.033376745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:01.033486 containerd[1597]: time="2025-10-29T23:32:01.033439707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:01.033625 kubelet[2753]: E1029 23:32:01.033597 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:01.033882 kubelet[2753]: E1029 23:32:01.033640 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:01.033882 kubelet[2753]: E1029 23:32:01.033740 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79bdd98655-7jtgs_calico-apiserver(6412d168-75ee-476d-8869-8306cb9ca8b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:01.033882 kubelet[2753]: E1029 23:32:01.033784 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" podUID="6412d168-75ee-476d-8869-8306cb9ca8b0" Oct 29 23:32:01.712580 kubelet[2753]: E1029 23:32:01.712439 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" podUID="6412d168-75ee-476d-8869-8306cb9ca8b0" Oct 29 23:32:01.713771 kubelet[2753]: E1029 23:32:01.713722 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" podUID="78e681d5-73bb-40ef-a840-c0314e73e7fe" Oct 29 23:32:01.749140 systemd-networkd[1489]: cali82f93917678: Gained IPv6LL Oct 29 23:32:02.517159 systemd-networkd[1489]: cali16d6cedb7b8: Gained IPv6LL Oct 29 23:32:02.561182 kubelet[2753]: E1029 23:32:02.561140 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:02.575095 containerd[1597]: time="2025-10-29T23:32:02.562222914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nqpkr,Uid:f9e57f2b-e212-4e6d-8835-0563a00cdf57,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:02.575095 containerd[1597]: time="2025-10-29T23:32:02.564635092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65fc69cc4d-mslrm,Uid:17b6b460-22bb-4f11-b0a1-b884ed538e3c,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:02.575095 containerd[1597]: time="2025-10-29T23:32:02.570233745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c69cfcf75-tbp7d,Uid:145c01ef-4eaa-44de-b6fe-4efc0106e77d,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:02.716116 kubelet[2753]: E1029 23:32:02.716068 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" podUID="6412d168-75ee-476d-8869-8306cb9ca8b0" Oct 29 23:32:02.728436 systemd-networkd[1489]: cali22f43f35459: Link UP Oct 29 23:32:02.736673 systemd-networkd[1489]: cali22f43f35459: Gained carrier Oct 29 23:32:02.748988 containerd[1597]: 2025-10-29 23:32:02.617 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:02.748988 containerd[1597]: 2025-10-29 23:32:02.635 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0 calico-kube-controllers-65fc69cc4d- calico-system 17b6b460-22bb-4f11-b0a1-b884ed538e3c 845 0 2025-10-29 23:31:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:65fc69cc4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65fc69cc4d-mslrm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali22f43f35459 [] [] }} ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-" Oct 29 23:32:02.748988 containerd[1597]: 2025-10-29 23:32:02.635 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.748988 containerd[1597]: 2025-10-29 23:32:02.675 [INFO][4552] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" HandleID="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Workload="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.675 [INFO][4552] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" HandleID="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Workload="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ddc10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65fc69cc4d-mslrm", "timestamp":"2025-10-29 23:32:02.675702375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.675 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.675 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.675 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.685 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" host="localhost" Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.691 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.696 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.699 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.701 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:02.749237 containerd[1597]: 2025-10-29 23:32:02.702 [INFO][4552] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" host="localhost" Oct 29 23:32:02.749493 containerd[1597]: 2025-10-29 23:32:02.704 [INFO][4552] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318 Oct 29 23:32:02.749493 containerd[1597]: 2025-10-29 23:32:02.708 [INFO][4552] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" host="localhost" Oct 29 23:32:02.749493 containerd[1597]: 2025-10-29 23:32:02.713 [INFO][4552] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" host="localhost" Oct 29 23:32:02.749493 containerd[1597]: 2025-10-29 23:32:02.713 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" host="localhost" Oct 29 23:32:02.749493 containerd[1597]: 2025-10-29 23:32:02.713 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:02.749493 containerd[1597]: 2025-10-29 23:32:02.714 [INFO][4552] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" HandleID="k8s-pod-network.34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Workload="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.749606 containerd[1597]: 2025-10-29 23:32:02.719 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0", GenerateName:"calico-kube-controllers-65fc69cc4d-", Namespace:"calico-system", SelfLink:"", UID:"17b6b460-22bb-4f11-b0a1-b884ed538e3c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65fc69cc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65fc69cc4d-mslrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22f43f35459", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:02.749656 containerd[1597]: 2025-10-29 23:32:02.719 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.749656 containerd[1597]: 2025-10-29 23:32:02.719 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22f43f35459 ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.749656 containerd[1597]: 2025-10-29 23:32:02.737 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.749714 containerd[1597]: 2025-10-29 23:32:02.737 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0", GenerateName:"calico-kube-controllers-65fc69cc4d-", Namespace:"calico-system", SelfLink:"", UID:"17b6b460-22bb-4f11-b0a1-b884ed538e3c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65fc69cc4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318", Pod:"calico-kube-controllers-65fc69cc4d-mslrm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22f43f35459", MAC:"72:53:7d:e2:f9:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:02.749759 containerd[1597]: 2025-10-29 23:32:02.746 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" Namespace="calico-system" Pod="calico-kube-controllers-65fc69cc4d-mslrm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65fc69cc4d--mslrm-eth0" Oct 29 23:32:02.769668 containerd[1597]: time="2025-10-29T23:32:02.769558968Z" level=info msg="connecting to shim 34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318" address="unix:///run/containerd/s/611624822620a4297e10ac7ff2e4a6249d5fb5ce7e5338ccc07949aafdd730d1" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:02.793319 systemd[1]: Started cri-containerd-34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318.scope - libcontainer container 34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318. 
Oct 29 23:32:02.815940 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:02.831604 systemd-networkd[1489]: cali1333e153e22: Link UP Oct 29 23:32:02.833438 systemd-networkd[1489]: cali1333e153e22: Gained carrier Oct 29 23:32:02.848468 containerd[1597]: time="2025-10-29T23:32:02.848427925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65fc69cc4d-mslrm,Uid:17b6b460-22bb-4f11-b0a1-b884ed538e3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"34662c4f02d24e27b44f46579de0f78dec4c98db0e7a6ebf7ad904554acd3318\"" Oct 29 23:32:02.850507 containerd[1597]: time="2025-10-29T23:32:02.850451573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:32:02.854536 containerd[1597]: 2025-10-29 23:32:02.628 [INFO][4522] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:02.854536 containerd[1597]: 2025-10-29 23:32:02.647 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nqpkr-eth0 coredns-66bc5c9577- kube-system f9e57f2b-e212-4e6d-8835-0563a00cdf57 842 0 2025-10-29 23:31:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nqpkr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1333e153e22 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-" Oct 29 23:32:02.854536 containerd[1597]: 2025-10-29 23:32:02.647 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.854536 containerd[1597]: 2025-10-29 23:32:02.697 [INFO][4559] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" HandleID="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Workload="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.698 [INFO][4559] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" HandleID="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Workload="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nqpkr", "timestamp":"2025-10-29 23:32:02.697877942 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.698 [INFO][4559] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.713 [INFO][4559] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.714 [INFO][4559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.787 [INFO][4559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" host="localhost" Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.794 [INFO][4559] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.798 [INFO][4559] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.800 [INFO][4559] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.803 [INFO][4559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:02.854718 containerd[1597]: 2025-10-29 23:32:02.803 [INFO][4559] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" host="localhost" Oct 29 23:32:02.854912 containerd[1597]: 2025-10-29 23:32:02.805 [INFO][4559] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705 Oct 29 23:32:02.854912 containerd[1597]: 2025-10-29 23:32:02.812 [INFO][4559] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" host="localhost" Oct 29 23:32:02.854912 containerd[1597]: 2025-10-29 23:32:02.818 [INFO][4559] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" host="localhost" Oct 29 23:32:02.854912 containerd[1597]: 2025-10-29 23:32:02.818 [INFO][4559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" host="localhost" Oct 29 23:32:02.854912 containerd[1597]: 2025-10-29 23:32:02.818 [INFO][4559] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:02.854912 containerd[1597]: 2025-10-29 23:32:02.818 [INFO][4559] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" HandleID="k8s-pod-network.0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Workload="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.855046 containerd[1597]: 2025-10-29 23:32:02.821 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nqpkr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f9e57f2b-e212-4e6d-8835-0563a00cdf57", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nqpkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1333e153e22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:02.855046 containerd[1597]: 2025-10-29 23:32:02.822 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.855046 containerd[1597]: 2025-10-29 23:32:02.822 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1333e153e22 ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.855046 containerd[1597]: 2025-10-29 23:32:02.836 
[INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.855046 containerd[1597]: 2025-10-29 23:32:02.837 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nqpkr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f9e57f2b-e212-4e6d-8835-0563a00cdf57", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705", Pod:"coredns-66bc5c9577-nqpkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1333e153e22", MAC:"ce:d6:af:14:52:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:02.855046 containerd[1597]: 2025-10-29 23:32:02.847 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" Namespace="kube-system" Pod="coredns-66bc5c9577-nqpkr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nqpkr-eth0" Oct 29 23:32:02.872964 containerd[1597]: time="2025-10-29T23:32:02.872919787Z" level=info msg="connecting to shim 0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705" address="unix:///run/containerd/s/8c71c227ac229891b1cc2e4409d2264194d6374d13e0c6c08e255f6888a07840" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:02.893176 systemd[1]: Started 
cri-containerd-0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705.scope - libcontainer container 0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705. Oct 29 23:32:02.904787 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:02.961003 systemd-networkd[1489]: calif395810f1c8: Link UP Oct 29 23:32:02.961441 systemd-networkd[1489]: calif395810f1c8: Gained carrier Oct 29 23:32:02.974398 containerd[1597]: time="2025-10-29T23:32:02.974362841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nqpkr,Uid:f9e57f2b-e212-4e6d-8835-0563a00cdf57,Namespace:kube-system,Attempt:0,} returns sandbox id \"0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705\"" Oct 29 23:32:02.975645 kubelet[2753]: E1029 23:32:02.975618 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:02.980805 containerd[1597]: time="2025-10-29T23:32:02.980767674Z" level=info msg="CreateContainer within sandbox \"0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.634 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.655 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0 calico-apiserver-7c69cfcf75- calico-apiserver 145c01ef-4eaa-44de-b6fe-4efc0106e77d 847 0 2025-10-29 23:31:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c69cfcf75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c69cfcf75-tbp7d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif395810f1c8 [] [] }} ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.655 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.703 [INFO][4568] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" HandleID="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Workload="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.703 [INFO][4568] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" HandleID="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Workload="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c69cfcf75-tbp7d", "timestamp":"2025-10-29 23:32:02.703701361 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.703 [INFO][4568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.818 [INFO][4568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.819 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.887 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.912 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.919 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.921 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.924 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.924 [INFO][4568] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.925 [INFO][4568] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842 Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.935 [INFO][4568] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.957 [INFO][4568] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.957 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" host="localhost" Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.957 [INFO][4568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
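The IPAM entries above show Calico's allocation flow on this node: take the host-wide IPAM lock, confirm the host's affinity to the 192.168.88.128/26 block, and claim the next free address (192.168.88.134 here, following the .133 assignment earlier in the log). The snippet below is only an illustration of that block arithmetic with the Go standard library, under the assumption that .128 through .133 are already taken as the preceding allocations suggest; it is not Calico's allocator.

```go
// Illustrative only: walk the 192.168.88.128/26 block (64 addresses) and
// report the first address not yet claimed. The "in use" set is an assumption
// based on the assignments visible in this log, not read from the datastore.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 2^(32-26) = 64 addresses
	inUse := map[netip.Addr]bool{}
	for _, s := range []string{
		"192.168.88.128", "192.168.88.129", "192.168.88.130",
		"192.168.88.131", "192.168.88.132", "192.168.88.133",
	} {
		inUse[netip.MustParseAddr(s)] = true
	}

	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !inUse[a] {
			fmt.Println("next free address in", block, "is", a) // 192.168.88.134
			return
		}
	}
	fmt.Println("block", block, "is exhausted")
}
```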
Oct 29 23:32:02.982399 containerd[1597]: 2025-10-29 23:32:02.957 [INFO][4568] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" HandleID="k8s-pod-network.95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Workload="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.983168 containerd[1597]: 2025-10-29 23:32:02.959 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0", GenerateName:"calico-apiserver-7c69cfcf75-", Namespace:"calico-apiserver", SelfLink:"", UID:"145c01ef-4eaa-44de-b6fe-4efc0106e77d", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c69cfcf75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c69cfcf75-tbp7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif395810f1c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:02.983168 containerd[1597]: 2025-10-29 23:32:02.959 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.983168 containerd[1597]: 2025-10-29 23:32:02.959 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif395810f1c8 ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.983168 containerd[1597]: 2025-10-29 23:32:02.960 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.983168 containerd[1597]: 2025-10-29 23:32:02.960 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0", GenerateName:"calico-apiserver-7c69cfcf75-", Namespace:"calico-apiserver", SelfLink:"", UID:"145c01ef-4eaa-44de-b6fe-4efc0106e77d", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c69cfcf75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842", Pod:"calico-apiserver-7c69cfcf75-tbp7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif395810f1c8", MAC:"2e:b9:40:99:c8:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:02.983168 containerd[1597]: 2025-10-29 23:32:02.977 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" Namespace="calico-apiserver" Pod="calico-apiserver-7c69cfcf75-tbp7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c69cfcf75--tbp7d-eth0" Oct 29 23:32:02.996033 containerd[1597]: time="2025-10-29T23:32:02.995663388Z" level=info msg="Container e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:03.005203 containerd[1597]: time="2025-10-29T23:32:03.005156212Z" level=info msg="connecting to shim 95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842" address="unix:///run/containerd/s/75b612becb5c7d67d6afa1358a787f84162b9b26c258df9be621ff759dbb842f" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:03.014033 containerd[1597]: time="2025-10-29T23:32:03.013520247Z" level=info msg="CreateContainer within sandbox \"0848d7e9a2d822afb16b53fb5bd3a11a993883287bc94f0bda7cfe1206931705\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474\"" Oct 29 23:32:03.014871 containerd[1597]: time="2025-10-29T23:32:03.014848598Z" level=info msg="StartContainer for \"e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474\"" Oct 29 23:32:03.015819 containerd[1597]: time="2025-10-29T23:32:03.015788100Z" level=info msg="connecting to shim e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474" address="unix:///run/containerd/s/8c71c227ac229891b1cc2e4409d2264194d6374d13e0c6c08e255f6888a07840" protocol=ttrpc version=3 Oct 29 23:32:03.028179 systemd[1]: 
Started cri-containerd-95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842.scope - libcontainer container 95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842. Oct 29 23:32:03.033600 systemd[1]: Started cri-containerd-e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474.scope - libcontainer container e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474. Oct 29 23:32:03.040581 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:03.052301 containerd[1597]: time="2025-10-29T23:32:03.052263229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:03.053186 containerd[1597]: time="2025-10-29T23:32:03.053150370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:32:03.053186 containerd[1597]: time="2025-10-29T23:32:03.053211812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:03.053409 kubelet[2753]: E1029 23:32:03.053360 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:03.053409 kubelet[2753]: E1029 23:32:03.053420 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:03.053512 kubelet[2753]: E1029 23:32:03.053492 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65fc69cc4d-mslrm_calico-system(17b6b460-22bb-4f11-b0a1-b884ed538e3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:03.053608 kubelet[2753]: E1029 23:32:03.053531 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" podUID="17b6b460-22bb-4f11-b0a1-b884ed538e3c" Oct 29 23:32:03.064093 containerd[1597]: time="2025-10-29T23:32:03.063931381Z" level=info msg="StartContainer for \"e777ae35f9a83063b7729f90a8fae9e6902cfdd86c246fcd96f950785adee474\" returns successfully" Oct 29 
23:32:03.071770 containerd[1597]: time="2025-10-29T23:32:03.071673682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c69cfcf75-tbp7d,Uid:145c01ef-4eaa-44de-b6fe-4efc0106e77d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"95eadabae5f860bd956fa425333cf574b1d6535eabd31b5cda1369b3b3794842\"" Oct 29 23:32:03.073617 containerd[1597]: time="2025-10-29T23:32:03.073587366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:03.302808 containerd[1597]: time="2025-10-29T23:32:03.302657184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:03.303716 containerd[1597]: time="2025-10-29T23:32:03.303670327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:03.303779 containerd[1597]: time="2025-10-29T23:32:03.303670207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:03.304016 kubelet[2753]: E1029 23:32:03.303952 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:03.304072 kubelet[2753]: E1029 23:32:03.304017 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:03.304127 kubelet[2753]: E1029 23:32:03.304103 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c69cfcf75-tbp7d_calico-apiserver(145c01ef-4eaa-44de-b6fe-4efc0106e77d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:03.304160 kubelet[2753]: E1029 23:32:03.304140 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" podUID="145c01ef-4eaa-44de-b6fe-4efc0106e77d" Oct 29 23:32:03.363703 kubelet[2753]: I1029 23:32:03.363620 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:32:03.365018 kubelet[2753]: E1029 23:32:03.364868 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:03.563862 kubelet[2753]: E1029 
23:32:03.563400 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:03.564322 containerd[1597]: time="2025-10-29T23:32:03.563301817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2hkw,Uid:10144807-b663-4cb3-833e-3110eaa2d568,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:03.564322 containerd[1597]: time="2025-10-29T23:32:03.564134476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ds4w8,Uid:7b87c64e-f652-434c-854d-97f667ea0aaa,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:03.575000 containerd[1597]: time="2025-10-29T23:32:03.574165030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-7tj9t,Uid:d94c374e-545a-4366-ade5-85d4ae3cccd1,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:03.725849 kubelet[2753]: E1029 23:32:03.725777 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" podUID="145c01ef-4eaa-44de-b6fe-4efc0106e77d" Oct 29 23:32:03.731655 kubelet[2753]: E1029 23:32:03.731316 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:03.740645 kubelet[2753]: E1029 23:32:03.740117 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:03.742190 kubelet[2753]: E1029 23:32:03.742156 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" podUID="17b6b460-22bb-4f11-b0a1-b884ed538e3c" Oct 29 23:32:03.829098 systemd-networkd[1489]: calide3388e4fa2: Link UP Oct 29 23:32:03.829951 systemd-networkd[1489]: calide3388e4fa2: Gained carrier Oct 29 23:32:03.840008 kubelet[2753]: I1029 23:32:03.839641 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nqpkr" podStartSLOduration=39.839622415 podStartE2EDuration="39.839622415s" podCreationTimestamp="2025-10-29 23:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:03.778727276 +0000 UTC m=+46.316304542" watchObservedRunningTime="2025-10-29 23:32:03.839622415 +0000 UTC m=+46.377199681" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.665 [INFO][4799] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not 
exist Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.689 [INFO][4799] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--7tj9t-eth0 goldmane-7c778bb748- calico-system d94c374e-545a-4366-ade5-85d4ae3cccd1 849 0 2025-10-29 23:31:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-7tj9t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calide3388e4fa2 [] [] }} ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.689 [INFO][4799] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.739 [INFO][4845] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" HandleID="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Workload="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.740 [INFO][4845] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" HandleID="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Workload="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcfd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-7tj9t", "timestamp":"2025-10-29 23:32:03.73986025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.740 [INFO][4845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.740 [INFO][4845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.740 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.762 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.778 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.792 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.797 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.802 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.803 [INFO][4845] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.804 [INFO][4845] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039 Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.811 [INFO][4845] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.818 [INFO][4845] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.819 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" host="localhost" Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.820 [INFO][4845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:03.842885 containerd[1597]: 2025-10-29 23:32:03.820 [INFO][4845] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" HandleID="k8s-pod-network.33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Workload="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.844572 containerd[1597]: 2025-10-29 23:32:03.824 [INFO][4799] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--7tj9t-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d94c374e-545a-4366-ade5-85d4ae3cccd1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-7tj9t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide3388e4fa2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:03.844572 containerd[1597]: 2025-10-29 23:32:03.825 [INFO][4799] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.844572 containerd[1597]: 2025-10-29 23:32:03.825 [INFO][4799] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide3388e4fa2 ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.844572 containerd[1597]: 2025-10-29 23:32:03.830 [INFO][4799] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.844572 containerd[1597]: 2025-10-29 23:32:03.831 [INFO][4799] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--7tj9t-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d94c374e-545a-4366-ade5-85d4ae3cccd1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039", Pod:"goldmane-7c778bb748-7tj9t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide3388e4fa2", MAC:"8a:fb:3c:d6:f8:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:03.844572 containerd[1597]: 2025-10-29 23:32:03.839 [INFO][4799] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" Namespace="calico-system" Pod="goldmane-7c778bb748-7tj9t" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7tj9t-eth0" Oct 29 23:32:03.872374 containerd[1597]: time="2025-10-29T23:32:03.872316897Z" level=info msg="connecting to shim 33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039" address="unix:///run/containerd/s/c83541cf3efdb686e4534fdfb2509466d4d27535cc14e1b043b4e90b9e650c9d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:03.923698 systemd[1]: Started cri-containerd-33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039.scope - libcontainer container 33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039. 
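A few entries back, kubelet's pod_startup_latency_tracker reports podStartSLOduration=39.839622415s for coredns-66bc5c9577-nqpkr, which is exactly the gap between that pod's creation timestamp (23:31:24) and the watchObservedRunningTime (23:32:03.839622415) in the same entry. A minimal check of that arithmetic, with the timestamps copied from the log; this is not kubelet's own computation, which also accounts for image-pull time when pulls occurred:

```go
// Recompute the reported pod start duration from the two timestamps that
// appear in the "Observed pod startup duration" log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are parsed if present
	created, err := time.Parse(layout, "2025-10-29 23:31:24 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-10-29 23:32:03.839622415 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod start duration:", observed.Sub(created)) // 39.839622415s
}
```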
Oct 29 23:32:03.929286 systemd-networkd[1489]: cali3cb5dae4705: Link UP Oct 29 23:32:03.931197 systemd-networkd[1489]: cali3cb5dae4705: Gained carrier Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.629 [INFO][4776] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.656 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h2hkw-eth0 csi-node-driver- calico-system 10144807-b663-4cb3-833e-3110eaa2d568 744 0 2025-10-29 23:31:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h2hkw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3cb5dae4705 [] [] }} ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.657 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.756 [INFO][4830] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" HandleID="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Workload="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.765 [INFO][4830] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" HandleID="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Workload="localhost-k8s-csi--node--driver--h2hkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000438520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h2hkw", "timestamp":"2025-10-29 23:32:03.75656032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.766 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.820 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.820 [INFO][4830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.861 [INFO][4830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.875 [INFO][4830] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.891 [INFO][4830] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.895 [INFO][4830] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.899 [INFO][4830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.900 [INFO][4830] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.903 [INFO][4830] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765 Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.910 [INFO][4830] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.919 [INFO][4830] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.919 [INFO][4830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" host="localhost" Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.919 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:03.948615 containerd[1597]: 2025-10-29 23:32:03.919 [INFO][4830] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" HandleID="k8s-pod-network.055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Workload="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.949241 containerd[1597]: 2025-10-29 23:32:03.923 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h2hkw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10144807-b663-4cb3-833e-3110eaa2d568", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h2hkw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3cb5dae4705", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:03.949241 containerd[1597]: 2025-10-29 23:32:03.923 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.949241 containerd[1597]: 2025-10-29 23:32:03.923 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cb5dae4705 ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.949241 containerd[1597]: 2025-10-29 23:32:03.933 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.949241 containerd[1597]: 2025-10-29 23:32:03.934 [INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h2hkw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10144807-b663-4cb3-833e-3110eaa2d568", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765", Pod:"csi-node-driver-h2hkw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3cb5dae4705", MAC:"a6:58:39:d9:60:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:03.949241 containerd[1597]: 2025-10-29 23:32:03.945 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" Namespace="calico-system" Pod="csi-node-driver-h2hkw" WorkloadEndpoint="localhost-k8s-csi--node--driver--h2hkw-eth0" Oct 29 23:32:03.956920 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:03.972728 containerd[1597]: time="2025-10-29T23:32:03.972040260Z" level=info msg="connecting to shim 055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765" address="unix:///run/containerd/s/8fe766ab760ebe19b66e25d246eab2bf64dd5906ab7f271ab6f25fcc70849177" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:03.982016 containerd[1597]: time="2025-10-29T23:32:03.981954931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-7tj9t,Uid:d94c374e-545a-4366-ade5-85d4ae3cccd1,Namespace:calico-system,Attempt:0,} returns sandbox id \"33f019ec33f8a3441972ecee862577e570ceb83fc02fcce3c4f754b1a56ca039\"" Oct 29 23:32:03.983403 containerd[1597]: time="2025-10-29T23:32:03.983371884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:32:04.000195 systemd[1]: Started cri-containerd-055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765.scope - libcontainer container 055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765. 
Oct 29 23:32:04.015528 systemd-networkd[1489]: cali52ea6f322ae: Link UP Oct 29 23:32:04.016437 systemd-networkd[1489]: cali52ea6f322ae: Gained carrier Oct 29 23:32:04.027128 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.640 [INFO][4788] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.665 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ds4w8-eth0 coredns-66bc5c9577- kube-system 7b87c64e-f652-434c-854d-97f667ea0aaa 846 0 2025-10-29 23:31:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ds4w8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali52ea6f322ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.665 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.772 [INFO][4836] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" HandleID="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Workload="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.772 [INFO][4836] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" HandleID="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Workload="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000510c70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ds4w8", "timestamp":"2025-10-29 23:32:03.772266526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.772 [INFO][4836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.919 [INFO][4836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.919 [INFO][4836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.962 [INFO][4836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.977 [INFO][4836] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.992 [INFO][4836] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.995 [INFO][4836] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.997 [INFO][4836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.997 [INFO][4836] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:03.999 [INFO][4836] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501 Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:04.003 [INFO][4836] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:04.010 [INFO][4836] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:04.011 [INFO][4836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" host="localhost" Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:04.011 [INFO][4836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:04.038516 containerd[1597]: 2025-10-29 23:32:04.011 [INFO][4836] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" HandleID="k8s-pod-network.1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Workload="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.039058 containerd[1597]: 2025-10-29 23:32:04.013 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ds4w8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7b87c64e-f652-434c-854d-97f667ea0aaa", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ds4w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52ea6f322ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:04.039058 containerd[1597]: 2025-10-29 23:32:04.013 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.039058 containerd[1597]: 2025-10-29 23:32:04.013 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52ea6f322ae ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.039058 containerd[1597]: 2025-10-29 23:32:04.015 
[INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.039058 containerd[1597]: 2025-10-29 23:32:04.016 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ds4w8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7b87c64e-f652-434c-854d-97f667ea0aaa", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501", Pod:"coredns-66bc5c9577-ds4w8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52ea6f322ae", MAC:"f6:94:0b:40:80:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:04.039058 containerd[1597]: 2025-10-29 23:32:04.035 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" Namespace="kube-system" Pod="coredns-66bc5c9577-ds4w8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ds4w8-eth0" Oct 29 23:32:04.043907 containerd[1597]: time="2025-10-29T23:32:04.043872234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h2hkw,Uid:10144807-b663-4cb3-833e-3110eaa2d568,Namespace:calico-system,Attempt:0,} returns sandbox id \"055fa5504c975c30b6eb7e08496ad4eeb9e0ec7e7e979e36687b75add4fb5765\"" Oct 29 23:32:04.066215 containerd[1597]: time="2025-10-29T23:32:04.066164863Z" level=info msg="connecting 
to shim 1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501" address="unix:///run/containerd/s/26ab21cd7253da780a530de99416a26dcd3941ce8b91c02af6d382a7c41ea7b2" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:04.094243 systemd[1]: Started cri-containerd-1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501.scope - libcontainer container 1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501. Oct 29 23:32:04.106248 systemd-resolved[1270]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:04.129735 containerd[1597]: time="2025-10-29T23:32:04.129694994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ds4w8,Uid:7b87c64e-f652-434c-854d-97f667ea0aaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501\"" Oct 29 23:32:04.130466 kubelet[2753]: E1029 23:32:04.130443 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:04.135904 containerd[1597]: time="2025-10-29T23:32:04.135871855Z" level=info msg="CreateContainer within sandbox \"1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 23:32:04.144360 containerd[1597]: time="2025-10-29T23:32:04.144314968Z" level=info msg="Container 15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:04.148898 containerd[1597]: time="2025-10-29T23:32:04.148859952Z" level=info msg="CreateContainer within sandbox \"1b5f60a1172bd52fa7b84c22f295b37b172319dc714dfb033f2fffb5fe8b3501\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3\"" Oct 29 23:32:04.149318 containerd[1597]: time="2025-10-29T23:32:04.149259921Z" level=info msg="StartContainer for \"15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3\"" Oct 29 23:32:04.150156 containerd[1597]: time="2025-10-29T23:32:04.150090620Z" level=info msg="connecting to shim 15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3" address="unix:///run/containerd/s/26ab21cd7253da780a530de99416a26dcd3941ce8b91c02af6d382a7c41ea7b2" protocol=ttrpc version=3 Oct 29 23:32:04.168166 systemd[1]: Started cri-containerd-15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3.scope - libcontainer container 15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3. 
Oct 29 23:32:04.181129 systemd-networkd[1489]: calif395810f1c8: Gained IPv6LL Oct 29 23:32:04.192448 containerd[1597]: time="2025-10-29T23:32:04.192357185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:04.192994 containerd[1597]: time="2025-10-29T23:32:04.192950358Z" level=info msg="StartContainer for \"15cd3834d144234bf781956bfc3efcb3a08fd75bd27c893f5493eedbe9ebdac3\" returns successfully" Oct 29 23:32:04.193449 containerd[1597]: time="2025-10-29T23:32:04.193417969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:32:04.193563 containerd[1597]: time="2025-10-29T23:32:04.193492571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:04.193706 kubelet[2753]: E1029 23:32:04.193676 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:04.193776 kubelet[2753]: E1029 23:32:04.193712 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:04.193883 kubelet[2753]: E1029 23:32:04.193857 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-7tj9t_calico-system(d94c374e-545a-4366-ade5-85d4ae3cccd1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:04.193941 kubelet[2753]: E1029 23:32:04.193909 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7tj9t" podUID="d94c374e-545a-4366-ade5-85d4ae3cccd1" Oct 29 23:32:04.194041 containerd[1597]: time="2025-10-29T23:32:04.194016143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:32:04.245156 systemd-networkd[1489]: cali22f43f35459: Gained IPv6LL Oct 29 23:32:04.338840 kubelet[2753]: I1029 23:32:04.338791 2753 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:32:04.339689 kubelet[2753]: E1029 23:32:04.339332 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:04.405900 containerd[1597]: time="2025-10-29T23:32:04.405864540Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:04.407226 containerd[1597]: time="2025-10-29T23:32:04.407188091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:32:04.407406 containerd[1597]: time="2025-10-29T23:32:04.407327094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:32:04.407636 kubelet[2753]: E1029 23:32:04.407576 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:04.407689 kubelet[2753]: E1029 23:32:04.407639 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:04.407764 kubelet[2753]: E1029 23:32:04.407734 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h2hkw_calico-system(10144807-b663-4cb3-833e-3110eaa2d568): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:04.408819 containerd[1597]: time="2025-10-29T23:32:04.408790967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:32:04.472346 containerd[1597]: time="2025-10-29T23:32:04.472216416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b\" id:\"0988db95baef074474e7fff4854f545845b87daab90c0e700b2b5231857e468d\" pid:5101 exit_status:1 exited_at:{seconds:1761780724 nanos:470871865}" Oct 29 23:32:04.527659 systemd-networkd[1489]: vxlan.calico: Link UP Oct 29 23:32:04.527665 systemd-networkd[1489]: vxlan.calico: Gained carrier Oct 29 23:32:04.588055 containerd[1597]: time="2025-10-29T23:32:04.588011140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaeb9ed9ac7cbbb6def997192e4af6462cddfe0691d870fd4197f87f77092e9b\" id:\"3421236b2009dd32e5e369945e508b51abe6f4ae59a518a1da4d93d5feff50da\" pid:5144 exit_status:1 exited_at:{seconds:1761780724 nanos:583679521}" Oct 29 23:32:04.625630 containerd[1597]: time="2025-10-29T23:32:04.625571038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:04.638882 containerd[1597]: time="2025-10-29T23:32:04.638801420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:32:04.638882 containerd[1597]: 
time="2025-10-29T23:32:04.638842861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:32:04.639115 kubelet[2753]: E1029 23:32:04.639031 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:04.639115 kubelet[2753]: E1029 23:32:04.639080 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:04.639387 kubelet[2753]: E1029 23:32:04.639158 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h2hkw_calico-system(10144807-b663-4cb3-833e-3110eaa2d568): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:04.639387 kubelet[2753]: E1029 23:32:04.639203 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:32:04.745533 kubelet[2753]: E1029 23:32:04.745371 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h2hkw" 
podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:32:04.752427 kubelet[2753]: E1029 23:32:04.752370 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:04.754278 kubelet[2753]: E1029 23:32:04.754127 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:04.756549 kubelet[2753]: E1029 23:32:04.755872 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" podUID="17b6b460-22bb-4f11-b0a1-b884ed538e3c" Oct 29 23:32:04.756549 kubelet[2753]: E1029 23:32:04.756491 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7tj9t" podUID="d94c374e-545a-4366-ade5-85d4ae3cccd1" Oct 29 23:32:04.756549 kubelet[2753]: E1029 23:32:04.756486 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" podUID="145c01ef-4eaa-44de-b6fe-4efc0106e77d" Oct 29 23:32:04.806731 kubelet[2753]: I1029 23:32:04.806657 2753 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ds4w8" podStartSLOduration=40.806640693 podStartE2EDuration="40.806640693s" podCreationTimestamp="2025-10-29 23:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:04.806047719 +0000 UTC m=+47.343624985" watchObservedRunningTime="2025-10-29 23:32:04.806640693 +0000 UTC m=+47.344217959" Oct 29 23:32:04.822622 systemd-networkd[1489]: cali1333e153e22: Gained IPv6LL Oct 29 23:32:04.886115 systemd-networkd[1489]: calide3388e4fa2: Gained IPv6LL Oct 29 23:32:05.269119 systemd-networkd[1489]: cali52ea6f322ae: Gained IPv6LL Oct 29 23:32:05.397164 systemd-networkd[1489]: cali3cb5dae4705: Gained IPv6LL Oct 29 23:32:05.529938 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:47804.service - OpenSSH per-connection server daemon (10.0.0.1:47804). 
Oct 29 23:32:05.595147 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 47804 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:05.596610 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:05.600456 systemd-logind[1573]: New session 10 of user core. Oct 29 23:32:05.608158 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 23:32:05.758542 kubelet[2753]: E1029 23:32:05.757587 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7tj9t" podUID="d94c374e-545a-4366-ade5-85d4ae3cccd1" Oct 29 23:32:05.758542 kubelet[2753]: E1029 23:32:05.758051 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:32:05.758951 kubelet[2753]: E1029 23:32:05.758708 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:05.770784 sshd[5236]: Connection closed by 10.0.0.1 port 47804 Oct 29 23:32:05.772557 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:05.784139 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:47804.service: Deactivated successfully. Oct 29 23:32:05.786551 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 23:32:05.787541 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Oct 29 23:32:05.790519 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:47818.service - OpenSSH per-connection server daemon (10.0.0.1:47818). Oct 29 23:32:05.793177 systemd-logind[1573]: Removed session 10. Oct 29 23:32:05.842776 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 47818 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:05.843901 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:05.847745 systemd-logind[1573]: New session 11 of user core. Oct 29 23:32:05.856189 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 29 23:32:05.974217 systemd-networkd[1489]: vxlan.calico: Gained IPv6LL Oct 29 23:32:06.041926 sshd[5254]: Connection closed by 10.0.0.1 port 47818 Oct 29 23:32:06.041824 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:06.052627 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:47818.service: Deactivated successfully. Oct 29 23:32:06.054951 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 23:32:06.055718 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Oct 29 23:32:06.061413 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:47820.service - OpenSSH per-connection server daemon (10.0.0.1:47820). Oct 29 23:32:06.062555 systemd-logind[1573]: Removed session 11. Oct 29 23:32:06.129211 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 47820 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:06.131785 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:06.139255 systemd-logind[1573]: New session 12 of user core. Oct 29 23:32:06.148191 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 23:32:06.347380 sshd[5270]: Connection closed by 10.0.0.1 port 47820 Oct 29 23:32:06.348171 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:06.352205 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Oct 29 23:32:06.352544 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:47820.service: Deactivated successfully. Oct 29 23:32:06.356560 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 23:32:06.358356 systemd-logind[1573]: Removed session 12. Oct 29 23:32:06.759218 kubelet[2753]: E1029 23:32:06.759192 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:07.561320 containerd[1597]: time="2025-10-29T23:32:07.561282554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:32:07.804858 containerd[1597]: time="2025-10-29T23:32:07.804798656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:07.806028 containerd[1597]: time="2025-10-29T23:32:07.805960042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:32:07.806100 containerd[1597]: time="2025-10-29T23:32:07.806031323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:32:07.806268 kubelet[2753]: E1029 23:32:07.806188 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:07.806268 kubelet[2753]: E1029 23:32:07.806236 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:07.807137 kubelet[2753]: E1029 23:32:07.806311 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85b589694d-qs7sp_calico-system(5e141988-8039-43c5-b128-4c31fe9414d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:07.807715 containerd[1597]: time="2025-10-29T23:32:07.807667478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:32:08.012572 containerd[1597]: time="2025-10-29T23:32:08.012526702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:08.013522 containerd[1597]: time="2025-10-29T23:32:08.013484122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:32:08.013624 containerd[1597]: time="2025-10-29T23:32:08.013507522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:08.013741 kubelet[2753]: E1029 23:32:08.013695 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:08.013784 kubelet[2753]: E1029 23:32:08.013751 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:08.013845 kubelet[2753]: E1029 23:32:08.013828 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85b589694d-qs7sp_calico-system(5e141988-8039-43c5-b128-4c31fe9414d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:08.013894 kubelet[2753]: E1029 23:32:08.013870 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85b589694d-qs7sp" podUID="5e141988-8039-43c5-b128-4c31fe9414d8" Oct 29 23:32:11.363912 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:50888.service - OpenSSH per-connection server daemon (10.0.0.1:50888). Oct 29 23:32:11.426512 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 50888 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:11.428457 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:11.434513 systemd-logind[1573]: New session 13 of user core. Oct 29 23:32:11.441156 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 23:32:11.598166 sshd[5295]: Connection closed by 10.0.0.1 port 50888 Oct 29 23:32:11.598636 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:11.607065 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:50888.service: Deactivated successfully. Oct 29 23:32:11.608739 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 23:32:11.609487 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Oct 29 23:32:11.612095 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:50896.service - OpenSSH per-connection server daemon (10.0.0.1:50896). Oct 29 23:32:11.614799 systemd-logind[1573]: Removed session 13. Oct 29 23:32:11.683711 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 50896 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:11.685233 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:11.690412 systemd-logind[1573]: New session 14 of user core. Oct 29 23:32:11.699123 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 23:32:11.906908 sshd[5312]: Connection closed by 10.0.0.1 port 50896 Oct 29 23:32:11.907752 sshd-session[5309]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:11.916528 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:50896.service: Deactivated successfully. Oct 29 23:32:11.919605 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 23:32:11.921067 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Oct 29 23:32:11.922710 systemd-logind[1573]: Removed session 14. Oct 29 23:32:11.924410 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:50906.service - OpenSSH per-connection server daemon (10.0.0.1:50906). Oct 29 23:32:11.983532 sshd[5324]: Accepted publickey for core from 10.0.0.1 port 50906 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:11.984674 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:11.988658 systemd-logind[1573]: New session 15 of user core. Oct 29 23:32:11.996147 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 23:32:12.556111 sshd[5327]: Connection closed by 10.0.0.1 port 50906 Oct 29 23:32:12.556882 sshd-session[5324]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:12.564949 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:50906.service: Deactivated successfully. Oct 29 23:32:12.569891 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 23:32:12.571138 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. 
Oct 29 23:32:12.576040 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:50918.service - OpenSSH per-connection server daemon (10.0.0.1:50918). Oct 29 23:32:12.576759 systemd-logind[1573]: Removed session 15. Oct 29 23:32:12.632635 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 50918 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:12.633893 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:12.637845 systemd-logind[1573]: New session 16 of user core. Oct 29 23:32:12.646129 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 23:32:12.933052 sshd[5348]: Connection closed by 10.0.0.1 port 50918 Oct 29 23:32:12.933593 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:12.942633 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:50918.service: Deactivated successfully. Oct 29 23:32:12.945313 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 23:32:12.948439 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. Oct 29 23:32:12.951864 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:50922.service - OpenSSH per-connection server daemon (10.0.0.1:50922). Oct 29 23:32:12.953620 systemd-logind[1573]: Removed session 16. Oct 29 23:32:13.005303 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 50922 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:13.006640 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:13.013855 systemd-logind[1573]: New session 17 of user core. Oct 29 23:32:13.022146 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 23:32:13.160424 sshd[5363]: Connection closed by 10.0.0.1 port 50922 Oct 29 23:32:13.160789 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:13.165755 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:50922.service: Deactivated successfully. Oct 29 23:32:13.167702 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 23:32:13.168540 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Oct 29 23:32:13.170486 systemd-logind[1573]: Removed session 17. 
Oct 29 23:32:15.562161 containerd[1597]: time="2025-10-29T23:32:15.562077568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:15.787877 containerd[1597]: time="2025-10-29T23:32:15.787828405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:15.812040 containerd[1597]: time="2025-10-29T23:32:15.811901110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:15.812040 containerd[1597]: time="2025-10-29T23:32:15.811990871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:15.812446 kubelet[2753]: E1029 23:32:15.812173 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:15.812446 kubelet[2753]: E1029 23:32:15.812238 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:15.812446 kubelet[2753]: E1029 23:32:15.812407 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79bdd98655-7jtgs_calico-apiserver(6412d168-75ee-476d-8869-8306cb9ca8b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:15.812922 kubelet[2753]: E1029 23:32:15.812442 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" podUID="6412d168-75ee-476d-8869-8306cb9ca8b0" Oct 29 23:32:16.559474 containerd[1597]: time="2025-10-29T23:32:16.559411896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:32:16.780033 containerd[1597]: time="2025-10-29T23:32:16.779960545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:16.781620 containerd[1597]: time="2025-10-29T23:32:16.781565496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:32:16.781697 
containerd[1597]: time="2025-10-29T23:32:16.781653498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:16.781913 kubelet[2753]: E1029 23:32:16.781850 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:16.781969 kubelet[2753]: E1029 23:32:16.781911 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:16.782164 kubelet[2753]: E1029 23:32:16.782130 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65fc69cc4d-mslrm_calico-system(17b6b460-22bb-4f11-b0a1-b884ed538e3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:16.782232 kubelet[2753]: E1029 23:32:16.782169 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65fc69cc4d-mslrm" podUID="17b6b460-22bb-4f11-b0a1-b884ed538e3c" Oct 29 23:32:16.782266 containerd[1597]: time="2025-10-29T23:32:16.782220628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:17.020879 containerd[1597]: time="2025-10-29T23:32:17.020826098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:17.023350 containerd[1597]: time="2025-10-29T23:32:17.023295345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:17.023444 containerd[1597]: time="2025-10-29T23:32:17.023382786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:17.025398 kubelet[2753]: E1029 23:32:17.023672 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:17.025398 kubelet[2753]: E1029 23:32:17.023719 2753 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:17.025398 kubelet[2753]: E1029 23:32:17.023792 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79bdd98655-v9k4n_calico-apiserver(78e681d5-73bb-40ef-a840-c0314e73e7fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:17.025398 kubelet[2753]: E1029 23:32:17.023828 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-v9k4n" podUID="78e681d5-73bb-40ef-a840-c0314e73e7fe" Oct 29 23:32:18.181679 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:50924.service - OpenSSH per-connection server daemon (10.0.0.1:50924). Oct 29 23:32:18.230850 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 50924 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:18.232171 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:18.236040 systemd-logind[1573]: New session 18 of user core. Oct 29 23:32:18.251155 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 23:32:18.374382 sshd[5394]: Connection closed by 10.0.0.1 port 50924 Oct 29 23:32:18.374459 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:18.378266 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Oct 29 23:32:18.378509 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:50924.service: Deactivated successfully. Oct 29 23:32:18.381360 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 23:32:18.382845 systemd-logind[1573]: Removed session 18. 
Oct 29 23:32:18.559147 containerd[1597]: time="2025-10-29T23:32:18.558894319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:18.560516 kubelet[2753]: E1029 23:32:18.560451 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85b589694d-qs7sp" podUID="5e141988-8039-43c5-b128-4c31fe9414d8" Oct 29 23:32:18.761830 containerd[1597]: time="2025-10-29T23:32:18.761774272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:18.762850 containerd[1597]: time="2025-10-29T23:32:18.762793571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:18.762850 containerd[1597]: time="2025-10-29T23:32:18.762874812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:18.763084 kubelet[2753]: E1029 23:32:18.763029 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:18.763084 kubelet[2753]: E1029 23:32:18.763079 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:18.763231 kubelet[2753]: E1029 23:32:18.763162 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c69cfcf75-tbp7d_calico-apiserver(145c01ef-4eaa-44de-b6fe-4efc0106e77d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:18.763231 kubelet[2753]: E1029 23:32:18.763194 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c69cfcf75-tbp7d" podUID="145c01ef-4eaa-44de-b6fe-4efc0106e77d" Oct 29 23:32:19.560308 containerd[1597]: time="2025-10-29T23:32:19.560256102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:32:19.763952 containerd[1597]: time="2025-10-29T23:32:19.763911273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:19.765842 containerd[1597]: time="2025-10-29T23:32:19.765807189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:32:19.766020 containerd[1597]: time="2025-10-29T23:32:19.765878270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:32:19.766118 kubelet[2753]: E1029 23:32:19.766078 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:19.766457 kubelet[2753]: E1029 23:32:19.766128 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:19.766457 kubelet[2753]: E1029 23:32:19.766286 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h2hkw_calico-system(10144807-b663-4cb3-833e-3110eaa2d568): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:19.766805 containerd[1597]: time="2025-10-29T23:32:19.766654484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:32:19.968951 containerd[1597]: time="2025-10-29T23:32:19.968891350Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:19.969971 containerd[1597]: time="2025-10-29T23:32:19.969919889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:32:19.970054 containerd[1597]: time="2025-10-29T23:32:19.970013291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:19.970247 kubelet[2753]: E1029 23:32:19.970172 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:19.970521 kubelet[2753]: E1029 23:32:19.970335 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:19.970595 kubelet[2753]: E1029 23:32:19.970522 2753 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-7tj9t_calico-system(d94c374e-545a-4366-ade5-85d4ae3cccd1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:19.970595 kubelet[2753]: E1029 23:32:19.970562 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7tj9t" podUID="d94c374e-545a-4366-ade5-85d4ae3cccd1" Oct 29 23:32:19.970900 containerd[1597]: time="2025-10-29T23:32:19.970813625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:32:20.177558 containerd[1597]: time="2025-10-29T23:32:20.177212299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:20.178249 containerd[1597]: time="2025-10-29T23:32:20.178209317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:32:20.178323 containerd[1597]: time="2025-10-29T23:32:20.178237318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:32:20.178436 kubelet[2753]: E1029 23:32:20.178398 2753 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:20.178479 kubelet[2753]: E1029 23:32:20.178445 2753 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:20.178534 kubelet[2753]: E1029 23:32:20.178514 2753 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h2hkw_calico-system(10144807-b663-4cb3-833e-3110eaa2d568): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:20.178592 kubelet[2753]: E1029 23:32:20.178558 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568" Oct 29 23:32:23.393577 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:54212.service - OpenSSH per-connection server daemon (10.0.0.1:54212). Oct 29 23:32:23.464477 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:23.470882 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:23.483846 systemd-logind[1573]: New session 19 of user core. Oct 29 23:32:23.496193 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 23:32:23.628164 sshd[5410]: Connection closed by 10.0.0.1 port 54212 Oct 29 23:32:23.628688 sshd-session[5407]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:23.633051 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:54212.service: Deactivated successfully. Oct 29 23:32:23.639808 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 23:32:23.644298 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. Oct 29 23:32:23.646549 systemd-logind[1573]: Removed session 19. Oct 29 23:32:28.651900 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:54226.service - OpenSSH per-connection server daemon (10.0.0.1:54226). Oct 29 23:32:28.712418 sshd[5437]: Accepted publickey for core from 10.0.0.1 port 54226 ssh2: RSA SHA256:4Dr3tbK61/FsUQz8dYLzShceLjKIFoQvj0rUu7yBHE4 Oct 29 23:32:28.713732 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:28.722586 systemd-logind[1573]: New session 20 of user core. Oct 29 23:32:28.737260 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 23:32:28.866305 sshd[5440]: Connection closed by 10.0.0.1 port 54226 Oct 29 23:32:28.866434 sshd-session[5437]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:28.870487 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:54226.service: Deactivated successfully. Oct 29 23:32:28.873190 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 23:32:28.874003 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. Oct 29 23:32:28.874918 systemd-logind[1573]: Removed session 20. 
Oct 29 23:32:29.558904 kubelet[2753]: E1029 23:32:29.558840 2753 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 23:32:29.560221 kubelet[2753]: E1029 23:32:29.559738 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79bdd98655-7jtgs" podUID="6412d168-75ee-476d-8869-8306cb9ca8b0" Oct 29 23:32:30.560157 kubelet[2753]: E1029 23:32:30.559954 2753 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h2hkw" podUID="10144807-b663-4cb3-833e-3110eaa2d568"