Dec 16 12:44:53.775334 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:44:53.775397 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:44:53.775409 kernel: KASLR enabled
Dec 16 12:44:53.775414 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:44:53.775420 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 16 12:44:53.775425 kernel: random: crng init done
Dec 16 12:44:53.775432 kernel: secureboot: Secure boot disabled
Dec 16 12:44:53.775437 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:44:53.775458 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 16 12:44:53.775468 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:44:53.775474 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775479 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775485 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775491 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775498 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775505 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775512 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775518 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775523 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:44:53.775529 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:44:53.775535 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:44:53.775546 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:44:53.775552 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 16 12:44:53.775558 kernel: Zone ranges:
Dec 16 12:44:53.775564 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:44:53.775571 kernel: DMA32 empty
Dec 16 12:44:53.775577 kernel: Normal empty
Dec 16 12:44:53.775583 kernel: Device empty
Dec 16 12:44:53.775589 kernel: Movable zone start for each node
Dec 16 12:44:53.775594 kernel: Early memory node ranges
Dec 16 12:44:53.775600 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 16 12:44:53.775606 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 16 12:44:53.775613 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 16 12:44:53.775619 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 16 12:44:53.775625 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 16 12:44:53.775631 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 16 12:44:53.775637 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 16 12:44:53.775644 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 16 12:44:53.775650 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 16 12:44:53.775656 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 16 12:44:53.775666 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 16 12:44:53.775672 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 16 12:44:53.775679 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:44:53.775687 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:44:53.775693 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:44:53.775700 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 16 12:44:53.775706 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:44:53.775713 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:44:53.775719 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:44:53.775725 kernel: psci: Trusted OS migration not required
Dec 16 12:44:53.775732 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:44:53.775738 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:44:53.775744 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:44:53.775752 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:44:53.775758 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:44:53.775765 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:44:53.775771 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:44:53.775777 kernel: CPU features: detected: Spectre-v4
Dec 16 12:44:53.775784 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:44:53.775790 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:44:53.775796 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:44:53.775802 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:44:53.775808 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:44:53.775815 kernel: alternatives: applying boot alternatives
Dec 16 12:44:53.775822 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:44:53.775830 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:44:53.775836 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:44:53.775843 kernel: Fallback order for Node 0: 0
Dec 16 12:44:53.775849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:44:53.775855 kernel: Policy zone: DMA
Dec 16 12:44:53.775862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:44:53.775868 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:44:53.775874 kernel: software IO TLB: area num 4.
Dec 16 12:44:53.775880 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:44:53.775887 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 16 12:44:53.775893 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:44:53.775901 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:44:53.775908 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:44:53.775914 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:44:53.775921 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:44:53.775928 kernel: Tracing variant of Tasks RCU enabled.
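The kernel command line above is how Flatcar pins /usr to a dm-verity device: verity.usr selects the USR-A partition by PARTUUID, and verity.usrhash carries the expected root hash the initrd uses to assemble /dev/mapper/usr before mounting it read-only. As a rough sketch of what that mapping amounts to (not the initrd's actual code path): the hash tree is stored on the same partition, hence the single PARTUUID, so a manual equivalent would need a hash offset that the initrd derives for itself. The offset below is a placeholder; only the device and root hash come from the log.

  # sketch only: assumes the hash tree sits at <bytes-into-partition> on the USR-A partition
  veritysetup open \
    /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 usr \
    /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 \
    361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 \
    --hash-offset <bytes-into-partition>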
Dec 16 12:44:53.775934 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:44:53.775941 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:44:53.775948 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:44:53.775954 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:44:53.775960 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:44:53.775967 kernel: GICv3: 256 SPIs implemented
Dec 16 12:44:53.775974 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:44:53.775982 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:44:53.775988 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:44:53.775995 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:44:53.776001 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:44:53.776008 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:44:53.776014 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:44:53.776021 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:44:53.776027 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:44:53.776034 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:44:53.776040 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:44:53.776049 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:44:53.776060 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:44:53.776069 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:44:53.776080 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:44:53.776087 kernel: arm-pv: using stolen time PV
Dec 16 12:44:53.776093 kernel: Console: colour dummy device 80x25
Dec 16 12:44:53.776100 kernel: ACPI: Core revision 20240827
Dec 16 12:44:53.776107 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:44:53.776114 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:44:53.776121 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:44:53.776127 kernel: landlock: Up and running.
Dec 16 12:44:53.776135 kernel: SELinux: Initializing.
Dec 16 12:44:53.776142 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:44:53.776149 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:44:53.776156 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:44:53.776163 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:44:53.776170 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:44:53.776177 kernel: Remapping and enabling EFI services.
Dec 16 12:44:53.776184 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:44:53.776190 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:44:53.776203 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:44:53.776211 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:44:53.776218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:44:53.776227 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:44:53.776234 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:44:53.776241 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:44:53.776248 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:44:53.776256 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:44:53.776264 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:44:53.776272 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:44:53.776279 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:44:53.776286 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:44:53.776293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:44:53.776300 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:44:53.776307 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:44:53.776314 kernel: SMP: Total of 4 processors activated.
Dec 16 12:44:53.776321 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:44:53.776330 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:44:53.776337 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:44:53.776344 kernel: CPU features: detected: Common not Private translations
Dec 16 12:44:53.776358 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:44:53.776365 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:44:53.776372 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:44:53.776379 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:44:53.776399 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:44:53.776407 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:44:53.776417 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:44:53.776425 kernel: alternatives: applying system-wide alternatives
Dec 16 12:44:53.776433 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:44:53.776440 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 16 12:44:53.776507 kernel: devtmpfs: initialized
Dec 16 12:44:53.776515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:44:53.776523 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:44:53.776530 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:44:53.776537 kernel: 0 pages in range for non-PLT usage
Dec 16 12:44:53.776546 kernel: 508400 pages in range for PLT usage
Dec 16 12:44:53.776553 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:44:53.776560 kernel: SMBIOS 3.0.0 present.
Dec 16 12:44:53.776567 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:44:53.776574 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:44:53.776581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:44:53.776588 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:44:53.776596 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:44:53.776603 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:44:53.776611 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:44:53.776619 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Dec 16 12:44:53.776626 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:44:53.776633 kernel: cpuidle: using governor menu
Dec 16 12:44:53.776640 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:44:53.776647 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:44:53.776654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:44:53.776661 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:44:53.776667 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:44:53.776675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:44:53.776682 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:44:53.776690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:44:53.776696 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:44:53.776703 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:44:53.776711 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:44:53.776717 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:44:53.776724 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:44:53.776731 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:44:53.776738 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:44:53.776746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:44:53.776753 kernel: ACPI: Interpreter enabled
Dec 16 12:44:53.776760 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:44:53.776767 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:44:53.776774 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:44:53.776781 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:44:53.776788 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:44:53.776795 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:44:53.776801 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:44:53.776810 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:44:53.776817 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:44:53.776965 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:44:53.777030 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:44:53.777088 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:44:53.777145 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:44:53.777200 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:44:53.777211 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:44:53.777218 kernel: PCI host bridge to bus 0000:00
Dec 16 12:44:53.777283 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:44:53.777336 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:44:53.777398 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:44:53.777473 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:44:53.777558 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:44:53.777628 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:44:53.777689 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:44:53.777750 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:44:53.777809 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:44:53.777868 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:44:53.777927 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:44:53.777989 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:44:53.778042 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:44:53.778095 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:44:53.778147 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:44:53.778156 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:44:53.778163 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:44:53.778170 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:44:53.778177 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:44:53.778185 kernel: iommu: Default domain type: Translated
Dec 16 12:44:53.778192 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:44:53.778199 kernel: efivars: Registered efivars operations
Dec 16 12:44:53.778206 kernel: vgaarb: loaded
Dec 16 12:44:53.778213 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:44:53.778221 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:44:53.778228 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:44:53.778235 kernel: pnp: PnP ACPI init
Dec 16 12:44:53.778300 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:44:53.778312 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:44:53.778320 kernel: NET: Registered PF_INET protocol family
Dec 16 12:44:53.778327 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:44:53.778334 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:44:53.778342 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:44:53.778358 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:44:53.778366 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:44:53.778373 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:44:53.778380 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:44:53.778389 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:44:53.778396 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:44:53.778402 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:44:53.778409 kernel: kvm [1]: HYP mode not available
Dec 16 12:44:53.778417 kernel: Initialise system trusted keyrings
Dec 16 12:44:53.778424 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:44:53.778431 kernel: Key type asymmetric registered
Dec 16 12:44:53.778438 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:44:53.778466 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:44:53.778475 kernel: io scheduler mq-deadline registered
Dec 16 12:44:53.778482 kernel: io scheduler kyber registered
Dec 16 12:44:53.778489 kernel: io scheduler bfq registered
Dec 16 12:44:53.778496 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:44:53.778503 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:44:53.778510 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:44:53.778578 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:44:53.778588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:44:53.778596 kernel: thunder_xcv, ver 1.0
Dec 16 12:44:53.778604 kernel: thunder_bgx, ver 1.0
Dec 16 12:44:53.778612 kernel: nicpf, ver 1.0
Dec 16 12:44:53.778619 kernel: nicvf, ver 1.0
Dec 16 12:44:53.778692 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:44:53.778750 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:44:53 UTC (1765889093)
Dec 16 12:44:53.778759 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:44:53.778767 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:44:53.778774 kernel: watchdog: NMI not fully supported
Dec 16 12:44:53.778782 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:44:53.778789 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:44:53.778796 kernel: Segment Routing with IPv6
Dec 16 12:44:53.778803 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:44:53.778810 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:44:53.778817 kernel: Key type dns_resolver registered
Dec 16 12:44:53.778824 kernel: registered taskstats version 1
Dec 16 12:44:53.778831 kernel: Loading compiled-in X.509 certificates
Dec 16 12:44:53.778838 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:44:53.778846 kernel: Demotion targets for Node 0: null
Dec 16 12:44:53.778853 kernel: Key type .fscrypt registered
Dec 16 12:44:53.778860 kernel: Key type fscrypt-provisioning registered
Dec 16 12:44:53.778866 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:44:53.778873 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:44:53.778880 kernel: ima: No architecture policies found
Dec 16 12:44:53.778887 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:44:53.778894 kernel: clk: Disabling unused clocks
Dec 16 12:44:53.778901 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:44:53.778910 kernel: Warning: unable to open an initial console.
Dec 16 12:44:53.778917 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:44:53.778924 kernel: Run /init as init process
Dec 16 12:44:53.778931 kernel: with arguments:
Dec 16 12:44:53.778937 kernel: /init
Dec 16 12:44:53.778945 kernel: with environment:
Dec 16 12:44:53.778951 kernel: HOME=/
Dec 16 12:44:53.778958 kernel: TERM=linux
Dec 16 12:44:53.778967 systemd[1]: Successfully made /usr/ read-only.
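A quick cross-check of the rtc-efi line above: the bracketed value is the raw Unix epoch, and converting it back reproduces the wall-clock time the kernel set:

  $ date -u -d @1765889093
  Tue Dec 16 12:44:53 UTC 2025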
Dec 16 12:44:53.778979 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:44:53.778988 systemd[1]: Detected virtualization kvm.
Dec 16 12:44:53.778995 systemd[1]: Detected architecture arm64.
Dec 16 12:44:53.779002 systemd[1]: Running in initrd.
Dec 16 12:44:53.779010 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:44:53.779018 systemd[1]: Hostname set to <localhost>.
Dec 16 12:44:53.779026 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:44:53.779034 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:44:53.779042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:44:53.779050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:44:53.779058 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:44:53.779066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:44:53.779073 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:44:53.779082 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:44:53.779092 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:44:53.779100 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:44:53.779107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:44:53.779115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:44:53.779122 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:44:53.779130 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:44:53.779137 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:44:53.779145 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:44:53.779154 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:44:53.779161 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:44:53.779169 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:44:53.779177 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:44:53.779185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:44:53.779193 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:44:53.779201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:44:53.779210 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:44:53.779217 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:44:53.779241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:44:53.779249 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:44:53.779257 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:44:53.779265 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:44:53.779272 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:44:53.779280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:44:53.779288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:44:53.779295 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:44:53.779305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:44:53.779313 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:44:53.779336 systemd-journald[245]: Collecting audit messages is disabled.
Dec 16 12:44:53.779365 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:44:53.779375 systemd-journald[245]: Journal started
Dec 16 12:44:53.779393 systemd-journald[245]: Runtime Journal (/run/log/journal/12d79a2b9d8b42d3b314c977e84ce4ec) is 6M, max 48.5M, 42.4M free.
Dec 16 12:44:53.773020 systemd-modules-load[246]: Inserted module 'overlay'
Dec 16 12:44:53.781123 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:44:53.786461 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:44:53.787951 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 16 12:44:53.788749 kernel: Bridge firewalling registered
Dec 16 12:44:53.788625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:44:53.791478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:44:53.794899 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:44:53.796631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:44:53.798524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:44:53.809937 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:44:53.812779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:44:53.817820 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:44:53.819064 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:44:53.823284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:44:53.825677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:44:53.828327 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:44:53.829519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:44:53.831344 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:44:53.853384 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:44:53.866914 systemd-resolved[287]: Positive Trust Anchors:
Dec 16 12:44:53.866932 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:44:53.866963 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:44:53.871826 systemd-resolved[287]: Defaulting to hostname 'linux'.
Dec 16 12:44:53.872825 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:44:53.875503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:44:53.930458 kernel: SCSI subsystem initialized
Dec 16 12:44:53.934476 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:44:53.941500 kernel: iscsi: registered transport (tcp)
Dec 16 12:44:53.954474 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:44:53.954510 kernel: QLogic iSCSI HBA Driver
Dec 16 12:44:53.971087 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:44:53.990217 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:44:53.992268 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:44:54.036403 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:44:54.038561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:44:54.096487 kernel: raid6: neonx8 gen() 15752 MB/s
Dec 16 12:44:54.113468 kernel: raid6: neonx4 gen() 15786 MB/s
Dec 16 12:44:54.130492 kernel: raid6: neonx2 gen() 13146 MB/s
Dec 16 12:44:54.147469 kernel: raid6: neonx1 gen() 10410 MB/s
Dec 16 12:44:54.164467 kernel: raid6: int64x8 gen() 6897 MB/s
Dec 16 12:44:54.181480 kernel: raid6: int64x4 gen() 7352 MB/s
Dec 16 12:44:54.198472 kernel: raid6: int64x2 gen() 6098 MB/s
Dec 16 12:44:54.215478 kernel: raid6: int64x1 gen() 5047 MB/s
Dec 16 12:44:54.215506 kernel: raid6: using algorithm neonx4 gen() 15786 MB/s
Dec 16 12:44:54.232494 kernel: raid6: .... xor() 12350 MB/s, rmw enabled
Dec 16 12:44:54.232538 kernel: raid6: using neon recovery algorithm
Dec 16 12:44:54.237845 kernel: xor: measuring software checksum speed
Dec 16 12:44:54.237899 kernel: 8regs : 20839 MB/sec
Dec 16 12:44:54.238477 kernel: 32regs : 21670 MB/sec
Dec 16 12:44:54.239594 kernel: arm64_neon : 28022 MB/sec
Dec 16 12:44:54.239608 kernel: xor: using function: arm64_neon (28022 MB/sec)
Dec 16 12:44:54.291488 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:44:54.297340 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:44:54.299814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:44:54.335835 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Dec 16 12:44:54.339940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:44:54.341813 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:44:54.368480 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Dec 16 12:44:54.394490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:44:54.396814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:44:54.454523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:44:54.456780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:44:54.509640 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:44:54.509832 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 12:44:54.515424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:44:54.515564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:44:54.525900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:44:54.525922 kernel: GPT:9289727 != 19775487
Dec 16 12:44:54.525939 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:44:54.525948 kernel: GPT:9289727 != 19775487
Dec 16 12:44:54.525957 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:44:54.525966 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:44:54.520283 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:44:54.523669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:44:54.552544 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:44:54.555491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:44:54.563106 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:44:54.564571 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:44:54.576181 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 12:44:54.577247 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:44:54.585970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:44:54.587055 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:44:54.588886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:44:54.590714 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:44:54.593080 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:44:54.594703 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:44:54.609219 disk-uuid[595]: Primary Header is updated.
Dec 16 12:44:54.609219 disk-uuid[595]: Secondary Entries is updated.
Dec 16 12:44:54.609219 disk-uuid[595]: Secondary Header is updated.
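The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 19775487) are the usual sign of a disk image that was grown after it was written: the backup GPT header still sits where the smaller original image ended. The disk-uuid[595] lines that follow show Flatcar's disk-uuid.service repairing the headers automatically. A manual equivalent, shown against this VM's /dev/vda purely as an illustration, would be relocating the backup header with sgdisk:

  # move the backup GPT header and partition entries to the true end of the grown disk
  sgdisk --move-second-header /dev/vda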
Dec 16 12:44:54.613296 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:44:54.617046 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:44:55.621487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:44:55.622022 disk-uuid[598]: The operation has completed successfully.
Dec 16 12:44:55.653368 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:44:55.653504 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:44:55.673128 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:44:55.698602 sh[614]: Success
Dec 16 12:44:55.710573 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:44:55.710615 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:44:55.711549 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:44:55.720468 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:44:55.747872 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:44:55.750499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:44:55.760671 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:44:55.765461 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626)
Dec 16 12:44:55.767465 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:44:55.767497 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:44:55.771462 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:44:55.771480 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:44:55.772155 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:44:55.773372 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:44:55.774558 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:44:55.775382 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:44:55.778122 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:44:55.800751 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Dec 16 12:44:55.802604 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:44:55.802636 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:44:55.805037 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:44:55.805076 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:44:55.809472 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:44:55.812504 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:44:55.814517 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:44:55.885532 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:44:55.888228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:44:55.910400 ignition[706]: Ignition 2.22.0
Dec 16 12:44:55.910412 ignition[706]: Stage: fetch-offline
Dec 16 12:44:55.910469 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:44:55.910477 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:44:55.910560 ignition[706]: parsed url from cmdline: ""
Dec 16 12:44:55.910563 ignition[706]: no config URL provided
Dec 16 12:44:55.910567 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:44:55.910573 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:44:55.910593 ignition[706]: op(1): [started] loading QEMU firmware config module
Dec 16 12:44:55.910597 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:44:55.919718 ignition[706]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:44:55.937000 systemd-networkd[806]: lo: Link UP
Dec 16 12:44:55.937014 systemd-networkd[806]: lo: Gained carrier
Dec 16 12:44:55.937717 systemd-networkd[806]: Enumeration completed
Dec 16 12:44:55.938092 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:44:55.938096 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:44:55.938879 systemd-networkd[806]: eth0: Link UP
Dec 16 12:44:55.938966 systemd-networkd[806]: eth0: Gained carrier
Dec 16 12:44:55.938975 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:44:55.939568 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:44:55.940911 systemd[1]: Reached target network.target - Network.
Dec 16 12:44:55.974508 ignition[706]: parsing config with SHA512: 937114b073eafe4b0cb4a254042adc0848a1ac1a5eacabd15dca79d24acfcea8bccc8529a6827715f3194a81094a9ee19f7b1d43b1c91f07bf207997faaed2e8
Dec 16 12:44:55.976515 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:44:55.980309 unknown[706]: fetched base config from "system"
Dec 16 12:44:55.980320 unknown[706]: fetched user config from "qemu"
Dec 16 12:44:55.980714 ignition[706]: fetch-offline: fetch-offline passed
Dec 16 12:44:55.980770 ignition[706]: Ignition finished successfully
Dec 16 12:44:55.982747 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:44:55.984361 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:44:55.985108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:44:56.028892 ignition[814]: Ignition 2.22.0
Dec 16 12:44:56.028910 ignition[814]: Stage: kargs
Dec 16 12:44:56.029046 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:44:56.029055 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:44:56.029840 ignition[814]: kargs: kargs passed
Dec 16 12:44:56.029885 ignition[814]: Ignition finished successfully
Dec 16 12:44:56.032402 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:44:56.034817 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
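The fetch-offline stage above is Ignition's QEMU provider at work: with no config URL on the kernel command line, it loads qemu_fw_cfg and reads the user config from QEMU's firmware-config device, then merges it with the base config ("fetched base config from system", "fetched user config from qemu"). For reference, opt/com.coreos/config is the fw_cfg key Ignition reads on QEMU; the file name below is illustrative:

  # passing an Ignition config to a Flatcar guest via fw_cfg (illustrative invocation)
  qemu-system-aarch64 ... -fw_cfg name=opt/com.coreos/config,file=./config.ign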
Dec 16 12:44:56.072040 ignition[823]: Ignition 2.22.0
Dec 16 12:44:56.072059 ignition[823]: Stage: disks
Dec 16 12:44:56.072192 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:44:56.072201 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:44:56.072969 ignition[823]: disks: disks passed
Dec 16 12:44:56.075670 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:44:56.073013 ignition[823]: Ignition finished successfully
Dec 16 12:44:56.076954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:44:56.079551 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:44:56.081149 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:44:56.082417 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:44:56.083988 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:44:56.086405 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:44:56.119770 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:44:56.123998 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:44:56.126076 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:44:56.186482 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:44:56.186529 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:44:56.187624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:44:56.189786 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:44:56.191606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:44:56.192469 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:44:56.192513 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:44:56.192536 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:44:56.208170 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:44:56.210604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:44:56.215416 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Dec 16 12:44:56.215440 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:44:56.215466 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:44:56.218689 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:44:56.218744 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:44:56.219850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:44:56.247137 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:44:56.250479 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:44:56.254577 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:44:56.258351 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:44:56.325490 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:44:56.327755 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:44:56.329775 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:44:56.347499 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:44:56.360737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:44:56.377691 ignition[955]: INFO : Ignition 2.22.0
Dec 16 12:44:56.377691 ignition[955]: INFO : Stage: mount
Dec 16 12:44:56.379040 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:44:56.379040 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:44:56.379040 ignition[955]: INFO : mount: mount passed
Dec 16 12:44:56.379040 ignition[955]: INFO : Ignition finished successfully
Dec 16 12:44:56.381108 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:44:56.383133 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:44:56.772801 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:44:56.774306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:44:56.792362 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967)
Dec 16 12:44:56.792398 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:44:56.793472 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:44:56.795822 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:44:56.795839 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:44:56.797188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:44:56.832352 ignition[984]: INFO : Ignition 2.22.0
Dec 16 12:44:56.832352 ignition[984]: INFO : Stage: files
Dec 16 12:44:56.833810 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:44:56.833810 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:44:56.835622 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:44:56.835622 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:44:56.835622 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:44:56.839083 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:44:56.839083 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:44:56.839083 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:44:56.839083 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:44:56.839083 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:44:56.836985 unknown[984]: wrote ssh authorized keys file for user: core
Dec 16 12:44:56.888485 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:44:57.022928 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
"/sysroot/home/core/install.sh" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:57.026380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:57.041380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:57.041380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:57.041380 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 12:44:57.309593 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:44:57.502503 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:44:57.502503 ignition[984]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:44:57.505778 ignition[984]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:57.507363 ignition[984]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:57.507363 ignition[984]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:44:57.507363 ignition[984]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:44:57.507363 ignition[984]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:44:57.507363 ignition[984]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:44:57.507363 
Dec 16 12:44:57.507363 ignition[984]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:44:57.520302 ignition[984]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:44:57.523771 ignition[984]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:44:57.524991 ignition[984]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:44:57.524991 ignition[984]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:44:57.524991 ignition[984]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:44:57.524991 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:44:57.524991 ignition[984]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:44:57.524991 ignition[984]: INFO : files: files passed
Dec 16 12:44:57.524991 ignition[984]: INFO : Ignition finished successfully
Dec 16 12:44:57.526676 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:44:57.529365 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:44:57.531021 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:44:57.546670 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:44:57.547973 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 12:44:57.548269 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:44:57.551224 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:44:57.551224 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:44:57.553746 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:44:57.553104 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:44:57.554882 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:44:57.557267 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:44:57.596317 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:44:57.596438 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:44:57.598516 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:44:57.600056 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:44:57.601525 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:44:57.602290 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:44:57.632881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:44:57.635239 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:44:57.660741 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:44:57.661829 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:44:57.663581 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:44:57.665126 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:44:57.665259 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:44:57.667269 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:44:57.668947 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:44:57.670254 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:44:57.671679 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:44:57.673278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:44:57.674943 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:44:57.676652 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:44:57.678174 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:44:57.679797 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:44:57.681381 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:44:57.682909 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:44:57.684156 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:44:57.684292 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:44:57.686264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:44:57.687996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:44:57.689588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:44:57.690534 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:44:57.692397 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:44:57.692546 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:44:57.695095 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:44:57.695217 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:44:57.696938 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:44:57.698216 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:44:57.703490 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:44:57.704623 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:44:57.706349 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:44:57.707770 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:44:57.707869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:44:57.709203 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:44:57.709287 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:44:57.710577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:44:57.710709 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:44:57.712227 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:44:57.712330 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:44:57.714421 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:44:57.715911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:44:57.716040 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:44:57.718629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:44:57.719926 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:44:57.720057 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:44:57.721696 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:44:57.721793 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:44:57.724183 systemd-networkd[806]: eth0: Gained IPv6LL
Dec 16 12:44:57.727104 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:44:57.732515 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:44:57.739966 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:44:57.745398 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:44:57.745535 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:44:57.747816 ignition[1039]: INFO : Ignition 2.22.0
Dec 16 12:44:57.747816 ignition[1039]: INFO : Stage: umount
Dec 16 12:44:57.747816 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:44:57.747816 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:44:57.751162 ignition[1039]: INFO : umount: umount passed
Dec 16 12:44:57.751162 ignition[1039]: INFO : Ignition finished successfully
Dec 16 12:44:57.751061 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:44:57.751169 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:44:57.753755 systemd[1]: Stopped target network.target - Network.
Dec 16 12:44:57.754530 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:44:57.754592 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:44:57.756206 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:44:57.756253 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:44:57.757886 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:44:57.757937 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:44:57.759264 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:44:57.759302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:44:57.760649 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:44:57.760699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:44:57.762327 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:44:57.763788 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:44:57.773620 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:44:57.773717 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:44:57.776596 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:44:57.777076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:44:57.777152 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:44:57.779605 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:44:57.781261 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:44:57.781376 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:44:57.784187 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:44:57.784331 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:44:57.785715 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:44:57.785752 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:44:57.788589 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:44:57.789356 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:44:57.789411 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:44:57.791749 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:44:57.791816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:44:57.794291 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:44:57.794345 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:44:57.796456 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:44:57.801925 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:44:57.815528 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:44:57.815647 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:44:57.817542 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:44:57.817661 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:44:57.819713 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:44:57.819779 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:44:57.820867 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:44:57.820898 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:44:57.822792 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:44:57.822841 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:44:57.825272 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:44:57.825319 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:44:57.827911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:44:57.827963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:44:57.831364 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:44:57.832550 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:44:57.832610 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:44:57.835352 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:44:57.835392 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:44:57.838541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:44:57.838587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:44:57.842799 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 12:44:57.842849 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 12:44:57.842881 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:44:57.858818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:44:57.858938 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:44:57.861120 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:44:57.862869 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:44:57.879633 systemd[1]: Switching root.
Dec 16 12:44:57.907786 systemd-journald[245]: Journal stopped
Dec 16 12:44:58.620732 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:44:58.620784 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:44:58.620800 kernel: SELinux: policy capability open_perms=1
Dec 16 12:44:58.620810 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:44:58.620824 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:44:58.620836 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:44:58.620847 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:44:58.620857 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:44:58.620868 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:44:58.620877 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:44:58.620890 kernel: audit: type=1403 audit(1765889098.078:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:44:58.620905 systemd[1]: Successfully loaded SELinux policy in 55.691ms.
Dec 16 12:44:58.620927 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.354ms.
Dec 16 12:44:58.620939 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:44:58.620950 systemd[1]: Detected virtualization kvm.
Dec 16 12:44:58.620960 systemd[1]: Detected architecture arm64.
Dec 16 12:44:58.620985 systemd[1]: Detected first boot.
Dec 16 12:44:58.620996 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:44:58.621007 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:44:58.621022 zram_generator::config[1087]: No configuration found.
Dec 16 12:44:58.621032 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:44:58.621043 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:44:58.621057 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:44:58.621070 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:44:58.621080 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:44:58.621090 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:44:58.621100 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:44:58.621113 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:44:58.621123 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:44:58.621134 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:44:58.621150 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:44:58.621160 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:44:58.621172 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:44:58.621182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:44:58.621193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:44:58.621203 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:44:58.621215 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:44:58.621226 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:44:58.621236 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:44:58.621246 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 16 12:44:58.621256 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:44:58.621267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:44:58.621277 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:44:58.621287 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:44:58.621298 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:44:58.621308 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:44:58.621322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:44:58.621345 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:44:58.621357 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:44:58.621367 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:44:58.621377 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:44:58.621386 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:44:58.621401 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:44:58.621415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:44:58.621426 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:44:58.621436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:44:58.621452 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:44:58.621463 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:44:58.621473 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:44:58.621484 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:44:58.621494 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:44:58.621504 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:44:58.621517 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:44:58.621527 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:44:58.621537 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:44:58.621547 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:44:58.621557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:44:58.621568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:44:58.621578 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:44:58.621588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:44:58.621599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:44:58.621609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:44:58.621622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:44:58.621633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:44:58.621643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:44:58.621654 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:44:58.621663 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:44:58.621673 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:44:58.621683 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:44:58.621695 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:44:58.621705 kernel: loop: module loaded
Dec 16 12:44:58.621715 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:44:58.621725 kernel: fuse: init (API version 7.41)
Dec 16 12:44:58.621735 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:44:58.621746 kernel: ACPI: bus type drm_connector registered
Dec 16 12:44:58.621755 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:44:58.621766 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:44:58.621777 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:44:58.621819 systemd-journald[1154]: Collecting audit messages is disabled.
Dec 16 12:44:58.621841 systemd-journald[1154]: Journal started
Dec 16 12:44:58.621862 systemd-journald[1154]: Runtime Journal (/run/log/journal/12d79a2b9d8b42d3b314c977e84ce4ec) is 6M, max 48.5M, 42.4M free.
Dec 16 12:44:58.432867 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:44:58.444497 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 12:44:58.444889 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:44:58.624288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:44:58.625878 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:44:58.625906 systemd[1]: Stopped verity-setup.service.
Dec 16 12:44:58.630462 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:44:58.631010 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:44:58.632019 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:44:58.633157 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:44:58.634092 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:44:58.635110 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:44:58.636159 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:44:58.638473 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:44:58.639713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:44:58.641022 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:44:58.641205 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:44:58.642522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:44:58.642679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:44:58.643829 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:44:58.643987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:44:58.645273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:44:58.645469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:44:58.646669 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:44:58.646820 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:44:58.647944 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:44:58.648102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:44:58.649999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:44:58.651231 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:44:58.652621 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:44:58.653903 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:44:58.665028 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:44:58.667130 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:44:58.669043 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:44:58.670055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:44:58.670084 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:44:58.671757 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:44:58.677235 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:44:58.678267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:44:58.679250 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:44:58.681034 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:44:58.682245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:44:58.683570 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:44:58.684555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:44:58.685635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:44:58.689435 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:44:58.691976 systemd-journald[1154]: Time spent on flushing to /var/log/journal/12d79a2b9d8b42d3b314c977e84ce4ec is 23.495ms for 885 entries.
Dec 16 12:44:58.691976 systemd-journald[1154]: System Journal (/var/log/journal/12d79a2b9d8b42d3b314c977e84ce4ec) is 8M, max 195.6M, 187.6M free.
Dec 16 12:44:58.727847 systemd-journald[1154]: Received client request to flush runtime journal.
Dec 16 12:44:58.727899 kernel: loop0: detected capacity change from 0 to 211168
Dec 16 12:44:58.727918 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:44:58.691628 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:44:58.694950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:44:58.698524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:44:58.699545 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:44:58.713615 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:44:58.715280 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:44:58.720607 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:44:58.722175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:44:58.731885 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:44:58.733891 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:44:58.738002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:44:58.756487 kernel: loop1: detected capacity change from 0 to 100632
Dec 16 12:44:58.759473 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:44:58.767214 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Dec 16 12:44:58.767228 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Dec 16 12:44:58.771190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:44:58.782507 kernel: loop2: detected capacity change from 0 to 119840
Dec 16 12:44:58.824479 kernel: loop3: detected capacity change from 0 to 211168
Dec 16 12:44:58.831474 kernel: loop4: detected capacity change from 0 to 100632
Dec 16 12:44:58.837496 kernel: loop5: detected capacity change from 0 to 119840
Dec 16 12:44:58.841546 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 16 12:44:58.841919 (sd-merge)[1225]: Merged extensions into '/usr'.
Dec 16 12:44:58.848524 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:44:58.848541 systemd[1]: Reloading...
Dec 16 12:44:58.895513 zram_generator::config[1247]: No configuration found.
Dec 16 12:44:58.967656 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:44:59.047536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:44:59.048026 systemd[1]: Reloading finished in 199 ms.
Dec 16 12:44:59.078160 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:44:59.079498 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:44:59.092655 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:44:59.094399 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:44:59.104858 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:44:59.104876 systemd[1]: Reloading...
Dec 16 12:44:59.116939 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:44:59.117354 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:44:59.117783 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:44:59.118117 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:44:59.118828 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:44:59.119121 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Dec 16 12:44:59.119234 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Dec 16 12:44:59.123976 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:44:59.124083 systemd-tmpfiles[1286]: Skipping /boot
Dec 16 12:44:59.129806 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:44:59.129915 systemd-tmpfiles[1286]: Skipping /boot
Dec 16 12:44:59.156476 zram_generator::config[1312]: No configuration found.
Dec 16 12:44:59.284158 systemd[1]: Reloading finished in 178 ms.
Dec 16 12:44:59.303310 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:44:59.319478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:44:59.326990 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:44:59.329557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:44:59.343427 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:44:59.347023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:44:59.351387 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:44:59.355673 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:44:59.359533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:44:59.361789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:44:59.364755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:44:59.368658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:44:59.369732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:44:59.369849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:44:59.371639 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:44:59.374198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:44:59.374826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:44:59.379026 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:44:59.379256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:44:59.381277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:44:59.381467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:44:59.385419 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Dec 16 12:44:59.388816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:44:59.397514 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:44:59.400663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:44:59.402709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:44:59.405774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:44:59.408850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:44:59.410064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:44:59.410190 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:44:59.411665 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:44:59.418199 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:44:59.419979 augenrules[1386]: No rules
Dec 16 12:44:59.420079 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:44:59.422681 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:44:59.429706 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:44:59.431260 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:44:59.432495 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:44:59.434742 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:44:59.434892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:44:59.438067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:44:59.438228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:44:59.439683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:44:59.439830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:44:59.465075 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:44:59.475091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:44:59.476994 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:44:59.478608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:44:59.478661 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:44:59.482019 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:44:59.482935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:44:59.483004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:44:59.486563 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 12:44:59.487668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:44:59.504767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:44:59.505021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:44:59.550025 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:44:59.566291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:44:59.572590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:44:59.581341 systemd-networkd[1432]: lo: Link UP
Dec 16 12:44:59.581351 systemd-networkd[1432]: lo: Gained carrier
Dec 16 12:44:59.582220 systemd-networkd[1432]: Enumeration completed
Dec 16 12:44:59.582378 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:44:59.582933 systemd-resolved[1352]: Positive Trust Anchors:
Dec 16 12:44:59.582969 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:44:59.583003 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:44:59.589534 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Dec 16 12:44:59.591975 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:44:59.591985 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:44:59.593648 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:44:59.595905 systemd-networkd[1432]: eth0: Link UP
Dec 16 12:44:59.596019 systemd-networkd[1432]: eth0: Gained carrier
Dec 16 12:44:59.596039 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:44:59.596743 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:44:59.597980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:44:59.599226 systemd[1]: Reached target network.target - Network.
Dec 16 12:44:59.601547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:44:59.603143 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 12:44:59.604671 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:44:59.605807 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:44:59.607164 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:44:59.608235 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:44:59.608509 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:44:59.609708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:44:59.609860 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:44:59.610245 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
Dec 16 12:44:59.610815 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:44:59.611259 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 16 12:44:59.611601 systemd-timesyncd[1433]: Initial clock synchronization to Tue 2025-12-16 12:44:59.375134 UTC.
Dec 16 12:44:59.612344 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:44:59.613611 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:44:59.614659 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:44:59.616103 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:44:59.618520 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:44:59.620958 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:44:59.622223 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:44:59.623705 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:44:59.633157 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:44:59.634534 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:44:59.639492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:44:59.640919 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:44:59.643773 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:44:59.646238 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:44:59.647168 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:44:59.648055 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:44:59.648089 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:44:59.650637 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:44:59.653422 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:44:59.665209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:44:59.669582 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:44:59.671598 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:44:59.672940 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:44:59.674708 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:44:59.677532 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:44:59.681925 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:44:59.686313 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:44:59.687279 jq[1470]: false
Dec 16 12:44:59.690313 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:44:59.694535 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:44:59.695031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:44:59.696101 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:44:59.697910 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:44:59.698015 extend-filesystems[1471]: Found /dev/vda6
Dec 16 12:44:59.703886 extend-filesystems[1471]: Found /dev/vda9
Dec 16 12:44:59.703519 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:44:59.705362 extend-filesystems[1471]: Checking size of /dev/vda9
Dec 16 12:44:59.704865 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:44:59.708650 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:44:59.708944 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:44:59.709090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:44:59.711235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:44:59.711439 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:44:59.712129 jq[1486]: true
Dec 16 12:44:59.717438 extend-filesystems[1471]: Resized partition /dev/vda9
Dec 16 12:44:59.722154 extend-filesystems[1501]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 12:44:59.730591 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 12:44:59.733261 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 16 12:44:59.736599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:44:59.747145 tar[1494]: linux-arm64/LICENSE
Dec 16 12:44:59.752554 jq[1495]: true
Dec 16 12:44:59.758334 update_engine[1485]: I20251216 12:44:59.758008 1485 main.cc:92] Flatcar Update Engine starting
Dec 16 12:44:59.769469 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 16 12:44:59.769862 dbus-daemon[1467]: [system] SELinux support is enabled
Dec 16 12:44:59.771930 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:44:59.787675 extend-filesystems[1501]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 12:44:59.787675 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 16 12:44:59.787675 extend-filesystems[1501]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 16 12:44:59.794380 update_engine[1485]: I20251216 12:44:59.774681 1485 update_check_scheduler.cc:74] Next update check in 9m44s
Dec 16 12:44:59.776906 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:44:59.794702 extend-filesystems[1471]: Resized filesystem in /dev/vda9
Dec 16 12:44:59.798113 tar[1494]: linux-arm64/helm
Dec 16 12:44:59.776931 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:44:59.778207 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:44:59.778223 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:44:59.779415 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:44:59.782688 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:44:59.789365 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:44:59.789585 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:44:59.792076 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 16 12:44:59.792304 systemd-logind[1482]: New seat seat0.
Dec 16 12:44:59.794533 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:44:59.840517 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:44:59.843708 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 12:44:59.843787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:44:59.845320 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 12:44:59.859382 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:44:59.915488 containerd[1496]: time="2025-12-16T12:44:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:44:59.916630 containerd[1496]: time="2025-12-16T12:44:59.916593440Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927711880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.72µs"
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927750680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927768320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927914320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927929280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927951440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.927996120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.928007520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.928220880Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.928234120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.928243680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928479 containerd[1496]: time="2025-12-16T12:44:59.928251320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928728 containerd[1496]: time="2025-12-16T12:44:59.928337920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928885 containerd[1496]: time="2025-12-16T12:44:59.928860360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:44:59.928961 containerd[1496]: time="2025-12-16T12:44:59.928946320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:44:59.929014 containerd[1496]: time="2025-12-16T12:44:59.929002640Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:44:59.929087 containerd[1496]: time="2025-12-16T12:44:59.929074000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:44:59.929511 containerd[1496]: time="2025-12-16T12:44:59.929472160Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:44:59.929604 containerd[1496]: time="2025-12-16T12:44:59.929585960Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935164000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935224400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935239200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935250200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935261800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935272800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935284120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935294920Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935306280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935317400Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935336840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:44:59.935473 containerd[1496]: time="2025-12-16T12:44:59.935350280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935492560Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935514440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935528520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935539640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935550320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935561200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935580040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935591160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935602560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935615320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:44:59.935738 containerd[1496]: time="2025-12-16T12:44:59.935625720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:44:59.935919 containerd[1496]: time="2025-12-16T12:44:59.935793280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:44:59.935919 containerd[1496]: time="2025-12-16T12:44:59.935807240Z" level=info msg="Start snapshots syncer"
Dec 16 12:44:59.935919 containerd[1496]: time="2025-12-16T12:44:59.935837400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:44:59.936238 containerd[1496]: time="2025-12-16T12:44:59.936049640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:44:59.936238 containerd[1496]: time="2025-12-16T12:44:59.936109640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936167280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936266720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936288160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936299560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936309920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936322240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936348760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936360360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936384560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936395040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936405800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936429280Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936441600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:44:59.936722 containerd[1496]: time="2025-12-16T12:44:59.936475040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936485680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936494400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936513480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936524400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936602080Z" level=info msg="runtime interface created"
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936607480Z" level=info msg="created NRI interface"
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936615320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936626160Z" level=info msg="Connect containerd service"
Dec 16 12:44:59.937149 containerd[1496]: time="2025-12-16T12:44:59.936647320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:44:59.937399 containerd[1496]: time="2025-12-16T12:44:59.937311080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:45:00.015565 containerd[1496]: time="2025-12-16T12:45:00.015482536Z" level=info msg="Start subscribing containerd event"
Dec 16 12:45:00.015565 containerd[1496]: time="2025-12-16T12:45:00.015571628Z" level=info msg="Start recovering state"
Dec 16 12:45:00.015850 containerd[1496]: time="2025-12-16T12:45:00.015656566Z" level=info msg="Start event monitor"
Dec 16 12:45:00.015850 containerd[1496]: time="2025-12-16T12:45:00.015669376Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:45:00.015850 containerd[1496]: time="2025-12-16T12:45:00.015676403Z" level=info msg="Start streaming server"
Dec 16 12:45:00.015850 containerd[1496]: time="2025-12-16T12:45:00.015687233Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:45:00.015850 containerd[1496]:
time="2025-12-16T12:45:00.015694687Z" level=info msg="runtime interface starting up..." Dec 16 12:45:00.015850 containerd[1496]: time="2025-12-16T12:45:00.015699617Z" level=info msg="starting plugins..." Dec 16 12:45:00.015850 containerd[1496]: time="2025-12-16T12:45:00.015712427Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:45:00.016281 containerd[1496]: time="2025-12-16T12:45:00.016201869Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:45:00.016339 containerd[1496]: time="2025-12-16T12:45:00.016264835Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:45:00.018984 containerd[1496]: time="2025-12-16T12:45:00.018957229Z" level=info msg="containerd successfully booted in 0.103838s" Dec 16 12:45:00.019015 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:45:00.078384 tar[1494]: linux-arm64/README.md Dec 16 12:45:00.098837 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:45:00.837375 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:45:00.857015 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:45:00.859540 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:45:00.878504 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:45:00.878732 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:45:00.881194 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:45:00.906486 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:45:00.909029 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:45:00.911131 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:45:00.912303 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:45:01.306610 systemd-networkd[1432]: eth0: Gained IPv6LL Dec 16 12:45:01.309130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:45:01.310761 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:45:01.312964 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:45:01.315146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:01.325098 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:45:01.339282 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:45:01.339544 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:45:01.340928 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:45:01.345233 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:45:01.905367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:01.906811 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:45:01.909504 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:01.910987 systemd[1]: Startup finished in 2.071s (kernel) + 4.468s (initrd) + 3.888s (userspace) = 10.428s. 
Dec 16 12:45:02.270487 kubelet[1608]: E1216 12:45:02.270368 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:45:02.273008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:45:02.273167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:45:02.273511 systemd[1]: kubelet.service: Consumed 762ms CPU time, 259.6M memory peak.
Dec 16 12:45:06.723660 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 12:45:06.724571 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:41442.service - OpenSSH per-connection server daemon (10.0.0.1:41442).
Dec 16 12:45:06.803614 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 41442 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:06.805318 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:06.811041 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 12:45:06.812010 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 12:45:06.817331 systemd-logind[1482]: New session 1 of user core.
Dec 16 12:45:06.831637 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:45:06.834181 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:45:06.863485 (systemd)[1627]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 12:45:06.865627 systemd-logind[1482]: New session c1 of user core.
Dec 16 12:45:06.974196 systemd[1627]: Queued start job for default target default.target.
Dec 16 12:45:06.986309 systemd[1627]: Created slice app.slice - User Application Slice.
Dec 16 12:45:06.986337 systemd[1627]: Reached target paths.target - Paths.
Dec 16 12:45:06.986375 systemd[1627]: Reached target timers.target - Timers.
Dec 16 12:45:06.987512 systemd[1627]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:45:06.996513 systemd[1627]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:45:06.996577 systemd[1627]: Reached target sockets.target - Sockets.
Dec 16 12:45:06.996612 systemd[1627]: Reached target basic.target - Basic System.
Dec 16 12:45:06.996639 systemd[1627]: Reached target default.target - Main User Target.
Dec 16 12:45:06.996678 systemd[1627]: Startup finished in 125ms.
Dec 16 12:45:06.996781 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:45:06.997994 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:45:07.055773 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:41458.service - OpenSSH per-connection server daemon (10.0.0.1:41458).
Dec 16 12:45:07.101993 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 41458 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:07.103249 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:07.106978 systemd-logind[1482]: New session 2 of user core.
Dec 16 12:45:07.115640 systemd[1]: Started session-2.scope - Session 2 of User core.
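[Annotation] The kubelet failure at 12:45:02 above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, so the unit keeps failing and restarting until that happens (see the later restarts at 12:45:12 and 12:45:23). A minimal hand-written sketch of the missing file, with illustrative values not taken from this log:

    # kubeadm normally generates this; hand-writing it is only for standalone kubelets
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches the SystemdCgroup=true runc option in the containerd config dump above
    cgroupDriver: systemd
    EOF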
Dec 16 12:45:07.165778 sshd[1642]: Connection closed by 10.0.0.1 port 41458
Dec 16 12:45:07.166220 sshd-session[1639]: pam_unix(sshd:session): session closed for user core
Dec 16 12:45:07.179967 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:41458.service: Deactivated successfully.
Dec 16 12:45:07.181404 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 12:45:07.184588 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit.
Dec 16 12:45:07.185423 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:41472.service - OpenSSH per-connection server daemon (10.0.0.1:41472).
Dec 16 12:45:07.188048 systemd-logind[1482]: Removed session 2.
Dec 16 12:45:07.244126 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 41472 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:07.245249 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:07.250415 systemd-logind[1482]: New session 3 of user core.
Dec 16 12:45:07.267664 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:45:07.315777 sshd[1651]: Connection closed by 10.0.0.1 port 41472
Dec 16 12:45:07.316133 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
Dec 16 12:45:07.331259 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:41472.service: Deactivated successfully.
Dec 16 12:45:07.333420 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:45:07.334152 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:45:07.338516 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:41478.service - OpenSSH per-connection server daemon (10.0.0.1:41478).
Dec 16 12:45:07.339123 systemd-logind[1482]: Removed session 3.
Dec 16 12:45:07.392626 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 41478 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:07.394032 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:07.397870 systemd-logind[1482]: New session 4 of user core.
Dec 16 12:45:07.407639 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:45:07.458609 sshd[1660]: Connection closed by 10.0.0.1 port 41478
Dec 16 12:45:07.459077 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
Dec 16 12:45:07.474479 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:41478.service: Deactivated successfully.
Dec 16 12:45:07.476849 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:45:07.477459 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:45:07.479544 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:41490.service - OpenSSH per-connection server daemon (10.0.0.1:41490).
Dec 16 12:45:07.480137 systemd-logind[1482]: Removed session 4.
Dec 16 12:45:07.536573 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 41490 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:07.537790 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:07.541496 systemd-logind[1482]: New session 5 of user core.
Dec 16 12:45:07.547583 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:45:07.602722 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:45:07.602973 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:45:07.620341 sudo[1670]: pam_unix(sudo:session): session closed for user root
Dec 16 12:45:07.621871 sshd[1669]: Connection closed by 10.0.0.1 port 41490
Dec 16 12:45:07.622476 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Dec 16 12:45:07.632482 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:41490.service: Deactivated successfully.
Dec 16 12:45:07.633955 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:45:07.634695 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:45:07.636933 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:41498.service - OpenSSH per-connection server daemon (10.0.0.1:41498).
Dec 16 12:45:07.637620 systemd-logind[1482]: Removed session 5.
Dec 16 12:45:07.691967 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 41498 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:07.693247 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:07.696963 systemd-logind[1482]: New session 6 of user core.
Dec 16 12:45:07.706673 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:45:07.756744 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:45:07.757013 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:45:08.030694 sudo[1681]: pam_unix(sudo:session): session closed for user root
Dec 16 12:45:08.036137 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:45:08.036392 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:45:08.045898 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:45:08.085383 augenrules[1703]: No rules
Dec 16 12:45:08.086521 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:45:08.086738 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:45:08.088696 sudo[1680]: pam_unix(sudo:session): session closed for user root
Dec 16 12:45:08.090310 sshd[1679]: Connection closed by 10.0.0.1 port 41498
Dec 16 12:45:08.090189 sshd-session[1676]: pam_unix(sshd:session): session closed for user core
Dec 16 12:45:08.099354 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:41498.service: Deactivated successfully.
Dec 16 12:45:08.101695 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:45:08.103249 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:45:08.105213 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:41504.service - OpenSSH per-connection server daemon (10.0.0.1:41504).
Dec 16 12:45:08.105965 systemd-logind[1482]: Removed session 6.
Dec 16 12:45:08.164667 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 41504 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:45:08.165936 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:45:08.170070 systemd-logind[1482]: New session 7 of user core.
Dec 16 12:45:08.176634 systemd[1]: Started session-7.scope - Session 7 of User core.
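[Annotation] The sudo session above removes the shipped audit rule fragments and reloads the service; augenrules concatenates /etc/audit/rules.d/*.rules, so with the directory emptied it reports "No rules", exactly as logged. The equivalent interactive sequence, as a sketch:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # prints "No rules" when the kernel rule list is empty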
Dec 16 12:45:08.227086 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:45:08.227334 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:45:08.503630 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:45:08.520788 (dockerd)[1737]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:45:08.723484 dockerd[1737]: time="2025-12-16T12:45:08.723230240Z" level=info msg="Starting up"
Dec 16 12:45:08.726007 dockerd[1737]: time="2025-12-16T12:45:08.725984831Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:45:08.736516 dockerd[1737]: time="2025-12-16T12:45:08.736407414Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:45:08.767166 dockerd[1737]: time="2025-12-16T12:45:08.767057488Z" level=info msg="Loading containers: start."
Dec 16 12:45:08.776518 kernel: Initializing XFRM netlink socket
Dec 16 12:45:08.982151 systemd-networkd[1432]: docker0: Link UP
Dec 16 12:45:09.041954 dockerd[1737]: time="2025-12-16T12:45:09.041741453Z" level=info msg="Loading containers: done."
Dec 16 12:45:09.053182 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1935702813-merged.mount: Deactivated successfully.
Dec 16 12:45:09.055742 dockerd[1737]: time="2025-12-16T12:45:09.055411910Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:45:09.055742 dockerd[1737]: time="2025-12-16T12:45:09.055520182Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:45:09.055742 dockerd[1737]: time="2025-12-16T12:45:09.055592732Z" level=info msg="Initializing buildkit"
Dec 16 12:45:09.077396 dockerd[1737]: time="2025-12-16T12:45:09.077358899Z" level=info msg="Completed buildkit initialization"
Dec 16 12:45:09.083382 dockerd[1737]: time="2025-12-16T12:45:09.083344055Z" level=info msg="Daemon has completed initialization"
Dec 16 12:45:09.083538 dockerd[1737]: time="2025-12-16T12:45:09.083506997Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:45:09.083595 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:45:09.570520 containerd[1496]: time="2025-12-16T12:45:09.570483737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 12:45:10.040037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176191689.mount: Deactivated successfully.
Dec 16 12:45:11.178462 containerd[1496]: time="2025-12-16T12:45:11.178395055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:11.179735 containerd[1496]: time="2025-12-16T12:45:11.179691987Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387283"
Dec 16 12:45:11.180608 containerd[1496]: time="2025-12-16T12:45:11.180575387Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:11.185877 containerd[1496]: time="2025-12-16T12:45:11.185822788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:11.186631 containerd[1496]: time="2025-12-16T12:45:11.186592763Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.616068844s"
Dec 16 12:45:11.186678 containerd[1496]: time="2025-12-16T12:45:11.186630545Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Dec 16 12:45:11.187808 containerd[1496]: time="2025-12-16T12:45:11.187723472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 12:45:12.377878 containerd[1496]: time="2025-12-16T12:45:12.377838574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:12.379155 containerd[1496]: time="2025-12-16T12:45:12.379101663Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553083"
Dec 16 12:45:12.380147 containerd[1496]: time="2025-12-16T12:45:12.380114847Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:12.382374 containerd[1496]: time="2025-12-16T12:45:12.382349807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:12.384022 containerd[1496]: time="2025-12-16T12:45:12.383977596Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.196224997s"
Dec 16 12:45:12.384022 containerd[1496]: time="2025-12-16T12:45:12.384018909Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Dec 16 12:45:12.384381 containerd[1496]: time="2025-12-16T12:45:12.384363410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 12:45:12.523488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:45:12.524832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:45:12.655354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:12.659744 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:45:12.728874 kubelet[2027]: E1216 12:45:12.728820 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:45:12.732816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:45:12.732965 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:45:12.734547 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.6M memory peak.
Dec 16 12:45:13.570863 containerd[1496]: time="2025-12-16T12:45:13.570799468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:13.572072 containerd[1496]: time="2025-12-16T12:45:13.572044919Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298069"
Dec 16 12:45:13.573220 containerd[1496]: time="2025-12-16T12:45:13.573169561Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:13.576467 containerd[1496]: time="2025-12-16T12:45:13.576417306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:13.577468 containerd[1496]: time="2025-12-16T12:45:13.577410475Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.193022207s"
Dec 16 12:45:13.577468 containerd[1496]: time="2025-12-16T12:45:13.577442150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Dec 16 12:45:13.577864 containerd[1496]: time="2025-12-16T12:45:13.577839433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 12:45:14.494060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount551785923.mount: Deactivated successfully.
Dec 16 12:45:14.741145 containerd[1496]: time="2025-12-16T12:45:14.741098610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:14.741727 containerd[1496]: time="2025-12-16T12:45:14.741693173Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258675"
Dec 16 12:45:14.742723 containerd[1496]: time="2025-12-16T12:45:14.742693880Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:14.744607 containerd[1496]: time="2025-12-16T12:45:14.744505520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:14.745245 containerd[1496]: time="2025-12-16T12:45:14.745022238Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.167142763s"
Dec 16 12:45:14.745245 containerd[1496]: time="2025-12-16T12:45:14.745057159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Dec 16 12:45:14.745724 containerd[1496]: time="2025-12-16T12:45:14.745525936Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 12:45:15.288800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76719384.mount: Deactivated successfully.
Dec 16 12:45:16.356145 containerd[1496]: time="2025-12-16T12:45:16.356079174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:16.356673 containerd[1496]: time="2025-12-16T12:45:16.356639177Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Dec 16 12:45:16.357719 containerd[1496]: time="2025-12-16T12:45:16.357686277Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:16.369089 containerd[1496]: time="2025-12-16T12:45:16.369027275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:16.370254 containerd[1496]: time="2025-12-16T12:45:16.370210181Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.624656161s"
Dec 16 12:45:16.370254 containerd[1496]: time="2025-12-16T12:45:16.370247690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 16 12:45:16.370850 containerd[1496]: time="2025-12-16T12:45:16.370765998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:45:16.790490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809828619.mount: Deactivated successfully.
Dec 16 12:45:16.799453 containerd[1496]: time="2025-12-16T12:45:16.798168102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:45:16.799453 containerd[1496]: time="2025-12-16T12:45:16.798697332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 16 12:45:16.799936 containerd[1496]: time="2025-12-16T12:45:16.799907024Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:45:16.802171 containerd[1496]: time="2025-12-16T12:45:16.802139859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:45:16.803253 containerd[1496]: time="2025-12-16T12:45:16.803223870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 432.429013ms"
Dec 16 12:45:16.803253 containerd[1496]: time="2025-12-16T12:45:16.803253048Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:45:16.803834 containerd[1496]: time="2025-12-16T12:45:16.803779448Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 12:45:17.310831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648238114.mount: Deactivated successfully.
Dec 16 12:45:18.814956 containerd[1496]: time="2025-12-16T12:45:18.814906272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:18.815782 containerd[1496]: time="2025-12-16T12:45:18.815757915Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653"
Dec 16 12:45:18.816513 containerd[1496]: time="2025-12-16T12:45:18.816489599Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:18.819489 containerd[1496]: time="2025-12-16T12:45:18.819344644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:18.821379 containerd[1496]: time="2025-12-16T12:45:18.821350561Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.01754715s"
Dec 16 12:45:18.821379 containerd[1496]: time="2025-12-16T12:45:18.821381837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 16 12:45:22.983342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:45:22.984813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:45:23.152669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:23.160986 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:45:23.196566 kubelet[2191]: E1216 12:45:23.196518 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:45:23.199342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:45:23.199504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:45:23.200044 systemd[1]: kubelet.service: Consumed 144ms CPU time, 106.9M memory peak.
Dec 16 12:45:23.481687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:23.481827 systemd[1]: kubelet.service: Consumed 144ms CPU time, 106.9M memory peak.
Dec 16 12:45:23.484668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:45:23.506680 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)...
Dec 16 12:45:23.506698 systemd[1]: Reloading...
Dec 16 12:45:23.576415 zram_generator::config[2252]: No configuration found.
Dec 16 12:45:23.823100 systemd[1]: Reloading finished in 316 ms.
Dec 16 12:45:23.884085 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:45:23.884180 systemd[1]: kubelet.service: Failed with result 'signal'.
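[Annotation] The seven pulls logged between 12:45:09 and 12:45:18 are the standard kubeadm control-plane image set. They can be reproduced, or pre-seeded before kubeadm runs, with crictl against the same containerd socket; a sketch, assuming crictl is installed:

    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    for img in \
        registry.k8s.io/kube-apiserver:v1.33.7 \
        registry.k8s.io/kube-controller-manager:v1.33.7 \
        registry.k8s.io/kube-scheduler:v1.33.7 \
        registry.k8s.io/kube-proxy:v1.33.7 \
        registry.k8s.io/coredns/coredns:v1.12.0 \
        registry.k8s.io/pause:3.10 \
        registry.k8s.io/etcd:3.5.21-0; do
      crictl pull "$img"
    done
    crictl images    # should list the digests recorded in the ImageCreate events above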
Dec 16 12:45:23.884515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:23.884587 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95M memory peak.
Dec 16 12:45:23.886512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:45:24.036920 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:45:24.037617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:45:24.084193 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:45:24.084193 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:45:24.084193 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:45:24.084193 kubelet[2293]: I1216 12:45:24.084153 2293 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:45:24.761113 kubelet[2293]: I1216 12:45:24.761057 2293 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:45:24.761113 kubelet[2293]: I1216 12:45:24.761090 2293 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:45:24.761369 kubelet[2293]: I1216 12:45:24.761341 2293 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:45:24.784571 kubelet[2293]: I1216 12:45:24.784521 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:45:24.785672 kubelet[2293]: E1216 12:45:24.785636 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:45:24.794697 kubelet[2293]: I1216 12:45:24.794650 2293 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:45:24.797612 kubelet[2293]: I1216 12:45:24.797579 2293 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:45:24.798753 kubelet[2293]: I1216 12:45:24.798708 2293 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:45:24.801919 kubelet[2293]: I1216 12:45:24.798759 2293 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:45:24.801919 kubelet[2293]: I1216 12:45:24.801808 2293 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:45:24.801919 kubelet[2293]: I1216 12:45:24.801823 2293 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:45:24.803903 kubelet[2293]: I1216 12:45:24.802812 2293 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:45:24.806592 kubelet[2293]: I1216 12:45:24.806540 2293 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:45:24.806893 kubelet[2293]: I1216 12:45:24.806714 2293 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:45:24.808329 kubelet[2293]: E1216 12:45:24.808064 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:45:24.808501 kubelet[2293]: I1216 12:45:24.808486 2293 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:45:24.809476 kubelet[2293]: I1216 12:45:24.809459 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:45:24.810058 kubelet[2293]: E1216 12:45:24.810022 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:45:24.810856 kubelet[2293]: I1216 12:45:24.810786 2293 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:45:24.811610 kubelet[2293]: I1216 12:45:24.811541 2293 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:45:24.811684 kubelet[2293]: W1216 12:45:24.811668 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:45:24.814311 kubelet[2293]: I1216 12:45:24.814189 2293 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:45:24.814311 kubelet[2293]: I1216 12:45:24.814244 2293 server.go:1289] "Started kubelet"
Dec 16 12:45:24.815530 kubelet[2293]: I1216 12:45:24.815502 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:45:24.819469 kubelet[2293]: I1216 12:45:24.819233 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:45:24.820621 kubelet[2293]: I1216 12:45:24.820601 2293 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:45:24.820776 kubelet[2293]: I1216 12:45:24.820753 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:45:24.821786 kubelet[2293]: I1216 12:45:24.821767 2293 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 12:45:24.824713 kubelet[2293]: E1216 12:45:24.824600 2293 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:45:24.824815 kubelet[2293]: E1216 12:45:24.824794 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:45:24.824857 kubelet[2293]: I1216 12:45:24.824837 2293 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:45:24.825066 kubelet[2293]: I1216 12:45:24.825041 2293 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:45:24.825417 kubelet[2293]: I1216 12:45:24.825375 2293 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:45:24.825510 kubelet[2293]: I1216 12:45:24.825397 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:45:24.826584 kubelet[2293]: E1216 12:45:24.821571 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b2cd8d46bb43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:45:24.814207811 +0000 UTC m=+0.773824929,LastTimestamp:2025-12-16 12:45:24.814207811 +0000 UTC m=+0.773824929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 16 12:45:24.826742 kubelet[2293]: E1216 12:45:24.826713 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:45:24.826930 kubelet[2293]: I1216 12:45:24.826907 2293 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:45:24.827033 kubelet[2293]: I1216 12:45:24.827014 2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:45:24.827252 kubelet[2293]: E1216 12:45:24.827219 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms"
Dec 16 12:45:24.828452 kubelet[2293]: I1216 12:45:24.828423 2293 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:45:24.841314 kubelet[2293]: I1216 12:45:24.841289 2293 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:45:24.841520 kubelet[2293]: I1216 12:45:24.841507 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:45:24.841592 kubelet[2293]: I1216 12:45:24.841573 2293 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:45:24.846787 kubelet[2293]: I1216 12:45:24.846735 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:45:24.848681 kubelet[2293]: I1216 12:45:24.848640 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:45:24.848681 kubelet[2293]: I1216 12:45:24.848679 2293 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 16 12:45:24.848839 kubelet[2293]: I1216 12:45:24.848703 2293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:45:24.848839 kubelet[2293]: I1216 12:45:24.848711 2293 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 12:45:24.848839 kubelet[2293]: E1216 12:45:24.848765 2293 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:45:24.925997 kubelet[2293]: E1216 12:45:24.925903 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:45:24.949339 kubelet[2293]: E1216 12:45:24.949264 2293 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 12:45:25.026793 kubelet[2293]: E1216 12:45:25.026635 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:45:25.027718 kubelet[2293]: E1216 12:45:25.027643 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:45:25.028169 kubelet[2293]: E1216 12:45:25.028126 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms"
Dec 16 12:45:25.028921 kubelet[2293]: I1216 12:45:25.028628 2293 policy_none.go:49] "None policy: Start"
Dec 16 12:45:25.028921 kubelet[2293]: I1216 12:45:25.028663 2293 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 12:45:25.028921 kubelet[2293]: I1216 12:45:25.028676 2293 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 12:45:25.036022 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 12:45:25.054264 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 12:45:25.071785 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 12:45:25.073405 kubelet[2293]: E1216 12:45:25.073301 2293 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:45:25.073794 kubelet[2293]: I1216 12:45:25.073552 2293 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:45:25.073794 kubelet[2293]: I1216 12:45:25.073569 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:45:25.074011 kubelet[2293]: I1216 12:45:25.073939 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:45:25.075179 kubelet[2293]: E1216 12:45:25.075140 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:45:25.075239 kubelet[2293]: E1216 12:45:25.075202 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Dec 16 12:45:25.160387 systemd[1]: Created slice kubepods-burstable-pode38d14adeff457e72b5da13e7659088a.slice - libcontainer container kubepods-burstable-pode38d14adeff457e72b5da13e7659088a.slice.
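[Annotation] The hex suffix in kubepods-burstable-pode38d14adeff457e72b5da13e7659088a.slice is the static pod's UID; it reappears below as the UID of kube-system/kube-apiserver-localhost in the volume-attach lines. These pods come from the static pod path /etc/kubernetes/manifests added earlier. An inspection sketch (paths are the kubeadm defaults; file contents are not shown in this log):

    ls /etc/kubernetes/manifests
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    systemd-cgls -u kubepods-burstable-pode38d14adeff457e72b5da13e7659088a.slice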
Dec 16 12:45:25.175698 kubelet[2293]: I1216 12:45:25.175668 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:45:25.176202 kubelet[2293]: E1216 12:45:25.176122 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Dec 16 12:45:25.178461 kubelet[2293]: E1216 12:45:25.178304 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:45:25.180375 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice.
Dec 16 12:45:25.194838 kubelet[2293]: E1216 12:45:25.194777 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:45:25.197825 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice.
Dec 16 12:45:25.199678 kubelet[2293]: E1216 12:45:25.199495 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:45:25.227759 kubelet[2293]: I1216 12:45:25.227719 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e38d14adeff457e72b5da13e7659088a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38d14adeff457e72b5da13e7659088a\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:45:25.227923 kubelet[2293]: I1216 12:45:25.227906 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:45:25.228006 kubelet[2293]: I1216 12:45:25.227991 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:45:25.228095 kubelet[2293]: I1216 12:45:25.228084 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:45:25.228230 kubelet[2293]: I1216 12:45:25.228176 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost"
Dec 16 12:45:25.228230 kubelet[2293]: I1216 12:45:25.228195 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e38d14adeff457e72b5da13e7659088a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38d14adeff457e72b5da13e7659088a\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:45:25.228296 kubelet[2293]: I1216 12:45:25.228215 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e38d14adeff457e72b5da13e7659088a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e38d14adeff457e72b5da13e7659088a\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:45:25.228441 kubelet[2293]: I1216 12:45:25.228350 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:45:25.228441 kubelet[2293]: I1216 12:45:25.228392 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:45:25.377604 kubelet[2293]: I1216 12:45:25.377502 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:45:25.377937 kubelet[2293]: E1216 12:45:25.377908 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Dec 16 12:45:25.429420 kubelet[2293]: E1216 12:45:25.429360 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms"
Dec 16 12:45:25.479871 kubelet[2293]: E1216 12:45:25.479804 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:25.481163 containerd[1496]: time="2025-12-16T12:45:25.480541072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e38d14adeff457e72b5da13e7659088a,Namespace:kube-system,Attempt:0,}"
Dec 16 12:45:25.496429 kubelet[2293]: E1216 12:45:25.495981 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:25.496691 containerd[1496]: time="2025-12-16T12:45:25.496637437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}"
Dec 16 12:45:25.500982 kubelet[2293]: E1216 12:45:25.500918 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:25.501548 containerd[1496]: time="2025-12-16T12:45:25.501501378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}"
Dec 16 12:45:25.566063 containerd[1496]: time="2025-12-16T12:45:25.565951971Z" level=info msg="connecting to shim 2ec25cab8672ee0f0d2c50805e3423d96936a9e57575bcb3f356cb243d2fec2e" address="unix:///run/containerd/s/c27068872078e8616e9c6fa7375d716d4af256beb6d7051ce9e264b16b39df53" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:45:25.574108 containerd[1496]: time="2025-12-16T12:45:25.574046605Z" level=info msg="connecting to shim 118ff409c9d82d29b81d63ab332dee7da840c6091acf7a11ff3a2ecf8b3462f3" address="unix:///run/containerd/s/a0c67611a465a7d83996861f432e7347105d0b56fc342bc3b8006e3f71dfeaff" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:45:25.576519 containerd[1496]: time="2025-12-16T12:45:25.575017547Z" level=info msg="connecting to shim 3b89c380a4aaa462532759fe1920b304490bfa86bbc6b8d4d9dccd660fd7dc33" address="unix:///run/containerd/s/c24766e8929a1bd2a7102f71d0b54dbde0476dcd0ca01b4cff6bd13deb339b17" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:45:25.611686 systemd[1]: Started cri-containerd-2ec25cab8672ee0f0d2c50805e3423d96936a9e57575bcb3f356cb243d2fec2e.scope - libcontainer container 2ec25cab8672ee0f0d2c50805e3423d96936a9e57575bcb3f356cb243d2fec2e.
Dec 16 12:45:25.613085 systemd[1]: Started cri-containerd-3b89c380a4aaa462532759fe1920b304490bfa86bbc6b8d4d9dccd660fd7dc33.scope - libcontainer container 3b89c380a4aaa462532759fe1920b304490bfa86bbc6b8d4d9dccd660fd7dc33.
Dec 16 12:45:25.616414 systemd[1]: Started cri-containerd-118ff409c9d82d29b81d63ab332dee7da840c6091acf7a11ff3a2ecf8b3462f3.scope - libcontainer container 118ff409c9d82d29b81d63ab332dee7da840c6091acf7a11ff3a2ecf8b3462f3.
Dec 16 12:45:25.665127 containerd[1496]: time="2025-12-16T12:45:25.664985149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"118ff409c9d82d29b81d63ab332dee7da840c6091acf7a11ff3a2ecf8b3462f3\""
Dec 16 12:45:25.670859 kubelet[2293]: E1216 12:45:25.670821 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:25.675323 containerd[1496]: time="2025-12-16T12:45:25.675268848Z" level=info msg="CreateContainer within sandbox \"118ff409c9d82d29b81d63ab332dee7da840c6091acf7a11ff3a2ecf8b3462f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 12:45:25.676504 containerd[1496]: time="2025-12-16T12:45:25.676463196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e38d14adeff457e72b5da13e7659088a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ec25cab8672ee0f0d2c50805e3423d96936a9e57575bcb3f356cb243d2fec2e\""
Dec 16 12:45:25.677166 kubelet[2293]: E1216 12:45:25.677139 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:25.677668 containerd[1496]: time="2025-12-16T12:45:25.677626577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b89c380a4aaa462532759fe1920b304490bfa86bbc6b8d4d9dccd660fd7dc33\""
Dec 16 12:45:25.678884 kubelet[2293]: E1216 12:45:25.678858 2293 dns.go:153] "Nameserver
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:25.684093 containerd[1496]: time="2025-12-16T12:45:25.684053279Z" level=info msg="CreateContainer within sandbox \"2ec25cab8672ee0f0d2c50805e3423d96936a9e57575bcb3f356cb243d2fec2e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:45:25.685536 containerd[1496]: time="2025-12-16T12:45:25.685498444Z" level=info msg="CreateContainer within sandbox \"3b89c380a4aaa462532759fe1920b304490bfa86bbc6b8d4d9dccd660fd7dc33\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:45:25.693920 containerd[1496]: time="2025-12-16T12:45:25.693864434Z" level=info msg="Container 8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:25.695916 containerd[1496]: time="2025-12-16T12:45:25.695868413Z" level=info msg="Container 43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:25.699758 containerd[1496]: time="2025-12-16T12:45:25.699723971Z" level=info msg="Container 81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:25.704638 containerd[1496]: time="2025-12-16T12:45:25.704597941Z" level=info msg="CreateContainer within sandbox \"118ff409c9d82d29b81d63ab332dee7da840c6091acf7a11ff3a2ecf8b3462f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d\"" Dec 16 12:45:25.705410 containerd[1496]: time="2025-12-16T12:45:25.705383877Z" level=info msg="StartContainer for \"8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d\"" Dec 16 12:45:25.706740 containerd[1496]: time="2025-12-16T12:45:25.706710966Z" level=info msg="connecting to shim 8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d" address="unix:///run/containerd/s/a0c67611a465a7d83996861f432e7347105d0b56fc342bc3b8006e3f71dfeaff" protocol=ttrpc version=3 Dec 16 12:45:25.709551 containerd[1496]: time="2025-12-16T12:45:25.709508273Z" level=info msg="CreateContainer within sandbox \"3b89c380a4aaa462532759fe1920b304490bfa86bbc6b8d4d9dccd660fd7dc33\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9\"" Dec 16 12:45:25.710189 containerd[1496]: time="2025-12-16T12:45:25.710161149Z" level=info msg="StartContainer for \"43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9\"" Dec 16 12:45:25.710872 containerd[1496]: time="2025-12-16T12:45:25.710831526Z" level=info msg="CreateContainer within sandbox \"2ec25cab8672ee0f0d2c50805e3423d96936a9e57575bcb3f356cb243d2fec2e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18\"" Dec 16 12:45:25.711533 containerd[1496]: time="2025-12-16T12:45:25.711506578Z" level=info msg="StartContainer for \"81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18\"" Dec 16 12:45:25.712580 containerd[1496]: time="2025-12-16T12:45:25.712554999Z" level=info msg="connecting to shim 81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18" address="unix:///run/containerd/s/c27068872078e8616e9c6fa7375d716d4af256beb6d7051ce9e264b16b39df53" protocol=ttrpc version=3 Dec 16 12:45:25.713139 
containerd[1496]: time="2025-12-16T12:45:25.713112775Z" level=info msg="connecting to shim 43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9" address="unix:///run/containerd/s/c24766e8929a1bd2a7102f71d0b54dbde0476dcd0ca01b4cff6bd13deb339b17" protocol=ttrpc version=3 Dec 16 12:45:25.732720 systemd[1]: Started cri-containerd-8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d.scope - libcontainer container 8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d. Dec 16 12:45:25.737457 systemd[1]: Started cri-containerd-43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9.scope - libcontainer container 43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9. Dec 16 12:45:25.738559 systemd[1]: Started cri-containerd-81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18.scope - libcontainer container 81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18. Dec 16 12:45:25.751266 kubelet[2293]: E1216 12:45:25.751219 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:45:25.780099 kubelet[2293]: I1216 12:45:25.780021 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:45:25.780459 kubelet[2293]: E1216 12:45:25.780387 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Dec 16 12:45:25.789097 containerd[1496]: time="2025-12-16T12:45:25.788967972Z" level=info msg="StartContainer for \"8424f2cade55ef717d6b45232d9c6af4debb2942d04a74010e9befe68122648d\" returns successfully" Dec 16 12:45:25.791622 containerd[1496]: time="2025-12-16T12:45:25.791244745Z" level=info msg="StartContainer for \"81f719cd287ad791a29501f6295b773d293d6015523c77425d8bb69402353a18\" returns successfully" Dec 16 12:45:25.802787 containerd[1496]: time="2025-12-16T12:45:25.802712562Z" level=info msg="StartContainer for \"43fdf6e1ac1b7f881d5558bbf08a4c23e2735f0e2076e2219f2a6d1a612ceca9\" returns successfully" Dec 16 12:45:25.860345 kubelet[2293]: E1216 12:45:25.860312 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:45:25.861980 kubelet[2293]: E1216 12:45:25.861824 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:25.864342 kubelet[2293]: E1216 12:45:25.864309 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:45:25.867467 kubelet[2293]: E1216 12:45:25.867321 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:25.871928 kubelet[2293]: E1216 12:45:25.871889 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:45:25.872528 kubelet[2293]: E1216 12:45:25.872404 2293 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:25.887847 kubelet[2293]: E1216 12:45:25.887800 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:45:26.583615 kubelet[2293]: I1216 12:45:26.583579 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:45:26.872828 kubelet[2293]: E1216 12:45:26.872711 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:45:26.872937 kubelet[2293]: E1216 12:45:26.872851 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:26.873613 kubelet[2293]: E1216 12:45:26.873581 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:45:26.873744 kubelet[2293]: E1216 12:45:26.873722 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:27.205358 kubelet[2293]: E1216 12:45:27.205185 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:45:27.304126 kubelet[2293]: I1216 12:45:27.304059 2293 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:45:27.304126 kubelet[2293]: E1216 12:45:27.304108 2293 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:45:27.330455 kubelet[2293]: E1216 12:45:27.330348 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:27.431528 kubelet[2293]: E1216 12:45:27.431432 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:27.532409 kubelet[2293]: E1216 12:45:27.532001 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:27.632775 kubelet[2293]: E1216 12:45:27.632722 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:27.733177 kubelet[2293]: E1216 12:45:27.733125 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:27.834356 kubelet[2293]: E1216 12:45:27.834240 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:27.928214 kubelet[2293]: I1216 12:45:27.928171 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:45:27.934256 kubelet[2293]: E1216 12:45:27.934059 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:45:27.934256 kubelet[2293]: I1216 12:45:27.934095 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:27.936489 kubelet[2293]: E1216 12:45:27.936357 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:27.938265 kubelet[2293]: I1216 12:45:27.938242 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:27.940439 kubelet[2293]: E1216 12:45:27.940388 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:28.812355 kubelet[2293]: I1216 12:45:28.812278 2293 apiserver.go:52] "Watching apiserver" Dec 16 12:45:28.826206 kubelet[2293]: I1216 12:45:28.826139 2293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:45:29.493686 systemd[1]: Reload requested from client PID 2578 ('systemctl') (unit session-7.scope)... Dec 16 12:45:29.493700 systemd[1]: Reloading... Dec 16 12:45:29.553484 zram_generator::config[2624]: No configuration found. Dec 16 12:45:29.724381 systemd[1]: Reloading finished in 230 ms. Dec 16 12:45:29.752532 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:29.762501 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:45:29.762751 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:29.762810 systemd[1]: kubelet.service: Consumed 1.147s CPU time, 128M memory peak. Dec 16 12:45:29.765020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:29.909943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:29.926861 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:45:29.965540 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:29.965540 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:45:29.965540 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:45:29.965866 kubelet[2663]: I1216 12:45:29.965602 2663 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:45:29.971181 kubelet[2663]: I1216 12:45:29.971138 2663 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:45:29.971181 kubelet[2663]: I1216 12:45:29.971170 2663 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:45:29.971401 kubelet[2663]: I1216 12:45:29.971371 2663 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:45:29.972589 kubelet[2663]: I1216 12:45:29.972563 2663 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:45:29.974994 kubelet[2663]: I1216 12:45:29.974960 2663 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:45:29.980236 kubelet[2663]: I1216 12:45:29.980196 2663 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:45:29.983284 kubelet[2663]: I1216 12:45:29.983251 2663 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:45:29.983523 kubelet[2663]: I1216 12:45:29.983484 2663 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:45:29.983675 kubelet[2663]: I1216 12:45:29.983521 2663 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:45:29.983748 kubelet[2663]: I1216 12:45:29.983685 2663 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:45:29.983748 kubelet[2663]: I1216 12:45:29.983695 2663 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:45:29.983748 kubelet[2663]: I1216 12:45:29.983736 2663 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:29.983890 kubelet[2663]: I1216 
12:45:29.983879 2663 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:45:29.983920 kubelet[2663]: I1216 12:45:29.983892 2663 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:45:29.983920 kubelet[2663]: I1216 12:45:29.983914 2663 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:45:29.983967 kubelet[2663]: I1216 12:45:29.983929 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:45:29.985270 kubelet[2663]: I1216 12:45:29.985229 2663 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:45:29.985861 kubelet[2663]: I1216 12:45:29.985830 2663 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:45:29.989930 kubelet[2663]: I1216 12:45:29.989910 2663 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:45:29.989996 kubelet[2663]: I1216 12:45:29.989970 2663 server.go:1289] "Started kubelet" Dec 16 12:45:29.990133 kubelet[2663]: I1216 12:45:29.990077 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:45:29.990339 kubelet[2663]: I1216 12:45:29.990322 2663 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:45:29.990392 kubelet[2663]: I1216 12:45:29.990374 2663 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:45:29.991224 kubelet[2663]: I1216 12:45:29.991180 2663 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:45:29.992956 kubelet[2663]: I1216 12:45:29.992343 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:45:29.999228 kubelet[2663]: I1216 12:45:29.998183 2663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:45:30.001183 kubelet[2663]: I1216 12:45:30.000885 2663 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:45:30.004837 kubelet[2663]: E1216 12:45:30.004722 2663 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:45:30.006474 kubelet[2663]: I1216 12:45:30.005132 2663 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:45:30.006474 kubelet[2663]: I1216 12:45:30.005300 2663 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:45:30.012480 kubelet[2663]: I1216 12:45:30.010011 2663 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:45:30.012480 kubelet[2663]: I1216 12:45:30.010229 2663 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:45:30.013473 kubelet[2663]: E1216 12:45:30.012956 2663 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:45:30.013909 kubelet[2663]: I1216 12:45:30.013791 2663 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:45:30.018482 kubelet[2663]: I1216 12:45:30.013453 2663 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:45:30.035265 kubelet[2663]: I1216 12:45:30.035222 2663 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:45:30.035265 kubelet[2663]: I1216 12:45:30.035253 2663 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:45:30.035265 kubelet[2663]: I1216 12:45:30.035276 2663 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:45:30.035622 kubelet[2663]: I1216 12:45:30.035282 2663 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:45:30.035622 kubelet[2663]: E1216 12:45:30.035327 2663 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:45:30.061813 kubelet[2663]: I1216 12:45:30.061782 2663 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:45:30.061813 kubelet[2663]: I1216 12:45:30.061802 2663 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:45:30.061813 kubelet[2663]: I1216 12:45:30.061822 2663 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:30.062025 kubelet[2663]: I1216 12:45:30.061946 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:45:30.062025 kubelet[2663]: I1216 12:45:30.061956 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:45:30.062025 kubelet[2663]: I1216 12:45:30.061971 2663 policy_none.go:49] "None policy: Start" Dec 16 12:45:30.062025 kubelet[2663]: I1216 12:45:30.061979 2663 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:45:30.062025 kubelet[2663]: I1216 12:45:30.061988 2663 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:45:30.062133 kubelet[2663]: I1216 12:45:30.062095 2663 state_mem.go:75] "Updated machine memory state" Dec 16 12:45:30.068148 kubelet[2663]: E1216 12:45:30.067910 2663 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:45:30.068148 kubelet[2663]: I1216 12:45:30.068111 2663 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:45:30.068148 kubelet[2663]: I1216 12:45:30.068123 2663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:45:30.068829 kubelet[2663]: I1216 12:45:30.068588 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:45:30.073167 kubelet[2663]: E1216 12:45:30.073135 2663 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:45:30.136408 kubelet[2663]: I1216 12:45:30.136368 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:45:30.136564 kubelet[2663]: I1216 12:45:30.136553 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:30.136709 kubelet[2663]: I1216 12:45:30.136692 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:30.172091 kubelet[2663]: I1216 12:45:30.172051 2663 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:45:30.188753 kubelet[2663]: I1216 12:45:30.188715 2663 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:45:30.189542 kubelet[2663]: I1216 12:45:30.188952 2663 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:45:30.205785 kubelet[2663]: I1216 12:45:30.205723 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e38d14adeff457e72b5da13e7659088a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e38d14adeff457e72b5da13e7659088a\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:30.205936 kubelet[2663]: I1216 12:45:30.205815 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:30.205936 kubelet[2663]: I1216 12:45:30.205841 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:30.205936 kubelet[2663]: I1216 12:45:30.205861 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:30.205936 kubelet[2663]: I1216 12:45:30.205878 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:45:30.205936 kubelet[2663]: I1216 12:45:30.205892 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e38d14adeff457e72b5da13e7659088a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38d14adeff457e72b5da13e7659088a\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:30.206039 kubelet[2663]: I1216 12:45:30.205906 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/e38d14adeff457e72b5da13e7659088a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38d14adeff457e72b5da13e7659088a\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:30.206039 kubelet[2663]: I1216 12:45:30.205919 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:30.206039 kubelet[2663]: I1216 12:45:30.205933 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:45:30.448235 kubelet[2663]: E1216 12:45:30.448087 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:30.457378 kubelet[2663]: E1216 12:45:30.457207 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:30.457882 kubelet[2663]: E1216 12:45:30.457768 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:30.985142 kubelet[2663]: I1216 12:45:30.985085 2663 apiserver.go:52] "Watching apiserver" Dec 16 12:45:31.005894 kubelet[2663]: I1216 12:45:31.005858 2663 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:45:31.052862 kubelet[2663]: I1216 12:45:31.052184 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:31.053726 kubelet[2663]: I1216 12:45:31.052310 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:45:31.053726 kubelet[2663]: E1216 12:45:31.053092 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:31.062697 kubelet[2663]: E1216 12:45:31.062348 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:45:31.062697 kubelet[2663]: E1216 12:45:31.062600 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:31.062997 kubelet[2663]: E1216 12:45:31.062861 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:45:31.063092 kubelet[2663]: E1216 12:45:31.063023 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:31.097896 kubelet[2663]: I1216 12:45:31.097819 2663 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.097797885 podStartE2EDuration="1.097797885s" podCreationTimestamp="2025-12-16 12:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:31.082343992 +0000 UTC m=+1.151535452" watchObservedRunningTime="2025-12-16 12:45:31.097797885 +0000 UTC m=+1.166989305" Dec 16 12:45:31.098056 kubelet[2663]: I1216 12:45:31.097979 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.097972803 podStartE2EDuration="1.097972803s" podCreationTimestamp="2025-12-16 12:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:31.095219218 +0000 UTC m=+1.164410678" watchObservedRunningTime="2025-12-16 12:45:31.097972803 +0000 UTC m=+1.167164263" Dec 16 12:45:31.125566 kubelet[2663]: I1216 12:45:31.124567 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.124548986 podStartE2EDuration="1.124548986s" podCreationTimestamp="2025-12-16 12:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:31.110464849 +0000 UTC m=+1.179656309" watchObservedRunningTime="2025-12-16 12:45:31.124548986 +0000 UTC m=+1.193740406" Dec 16 12:45:32.053944 kubelet[2663]: E1216 12:45:32.053909 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:32.054261 kubelet[2663]: E1216 12:45:32.053982 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:33.055989 kubelet[2663]: E1216 12:45:33.055959 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:33.056332 kubelet[2663]: E1216 12:45:33.056009 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:34.917743 kubelet[2663]: I1216 12:45:34.917715 2663 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:45:34.918217 containerd[1496]: time="2025-12-16T12:45:34.918183749Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:45:34.918456 kubelet[2663]: I1216 12:45:34.918423 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:45:35.842854 systemd[1]: Created slice kubepods-besteffort-podb6da172a_2d45_4dfd_ab95_01e8732f1576.slice - libcontainer container kubepods-besteffort-podb6da172a_2d45_4dfd_ab95_01e8732f1576.slice. 
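Above, the kubelet pushes PodCIDR 192.168.0.0/24 to the CRI runtime ("Updating runtime config through cri with podcidr"). A small standard-library sketch of what that /24 gives the node — the address math is just net.ParseCIDR applied to the value from the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// PodCIDR exactly as logged by kuberuntime_manager.go.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network %s, %d addresses for pods on this node\n",
		ipnet, 1<<(bits-ones)) // a /24 yields 256 addresses before reservations
}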
Dec 16 12:45:35.844579 kubelet[2663]: I1216 12:45:35.843618 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6da172a-2d45-4dfd-ab95-01e8732f1576-kube-proxy\") pod \"kube-proxy-hqb48\" (UID: \"b6da172a-2d45-4dfd-ab95-01e8732f1576\") " pod="kube-system/kube-proxy-hqb48" Dec 16 12:45:35.844579 kubelet[2663]: I1216 12:45:35.843649 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6da172a-2d45-4dfd-ab95-01e8732f1576-xtables-lock\") pod \"kube-proxy-hqb48\" (UID: \"b6da172a-2d45-4dfd-ab95-01e8732f1576\") " pod="kube-system/kube-proxy-hqb48" Dec 16 12:45:35.844579 kubelet[2663]: I1216 12:45:35.843724 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6da172a-2d45-4dfd-ab95-01e8732f1576-lib-modules\") pod \"kube-proxy-hqb48\" (UID: \"b6da172a-2d45-4dfd-ab95-01e8732f1576\") " pod="kube-system/kube-proxy-hqb48" Dec 16 12:45:35.844579 kubelet[2663]: I1216 12:45:35.843743 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxhq\" (UniqueName: \"kubernetes.io/projected/b6da172a-2d45-4dfd-ab95-01e8732f1576-kube-api-access-lwxhq\") pod \"kube-proxy-hqb48\" (UID: \"b6da172a-2d45-4dfd-ab95-01e8732f1576\") " pod="kube-system/kube-proxy-hqb48" Dec 16 12:45:36.010902 systemd[1]: Created slice kubepods-besteffort-poda6dd1354_5e99_46e1_a5ed_77e1143acb05.slice - libcontainer container kubepods-besteffort-poda6dd1354_5e99_46e1_a5ed_77e1143acb05.slice. Dec 16 12:45:36.046153 kubelet[2663]: I1216 12:45:36.046082 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7c42\" (UniqueName: \"kubernetes.io/projected/a6dd1354-5e99-46e1-a5ed-77e1143acb05-kube-api-access-j7c42\") pod \"tigera-operator-7dcd859c48-c8px6\" (UID: \"a6dd1354-5e99-46e1-a5ed-77e1143acb05\") " pod="tigera-operator/tigera-operator-7dcd859c48-c8px6" Dec 16 12:45:36.046562 kubelet[2663]: I1216 12:45:36.046169 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6dd1354-5e99-46e1-a5ed-77e1143acb05-var-lib-calico\") pod \"tigera-operator-7dcd859c48-c8px6\" (UID: \"a6dd1354-5e99-46e1-a5ed-77e1143acb05\") " pod="tigera-operator/tigera-operator-7dcd859c48-c8px6" Dec 16 12:45:36.152555 kubelet[2663]: E1216 12:45:36.152406 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:36.153783 containerd[1496]: time="2025-12-16T12:45:36.153704817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqb48,Uid:b6da172a-2d45-4dfd-ab95-01e8732f1576,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:36.181678 containerd[1496]: time="2025-12-16T12:45:36.181618471Z" level=info msg="connecting to shim 7ecb5549e29faa15326f5b400eb1c2f861db9061a57dfb2e46b3d7ba178ae79f" address="unix:///run/containerd/s/fe82dd3aa5475bb8821ce9d9aff103f6b611d0e9f47ae4d441d152fd7a850146" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.209686 systemd[1]: Started cri-containerd-7ecb5549e29faa15326f5b400eb1c2f861db9061a57dfb2e46b3d7ba178ae79f.scope - libcontainer container 
7ecb5549e29faa15326f5b400eb1c2f861db9061a57dfb2e46b3d7ba178ae79f. Dec 16 12:45:36.235292 containerd[1496]: time="2025-12-16T12:45:36.235251449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqb48,Uid:b6da172a-2d45-4dfd-ab95-01e8732f1576,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ecb5549e29faa15326f5b400eb1c2f861db9061a57dfb2e46b3d7ba178ae79f\"" Dec 16 12:45:36.236199 kubelet[2663]: E1216 12:45:36.236171 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:36.240971 containerd[1496]: time="2025-12-16T12:45:36.240935400Z" level=info msg="CreateContainer within sandbox \"7ecb5549e29faa15326f5b400eb1c2f861db9061a57dfb2e46b3d7ba178ae79f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:45:36.252472 containerd[1496]: time="2025-12-16T12:45:36.252204618Z" level=info msg="Container f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:36.259461 containerd[1496]: time="2025-12-16T12:45:36.259387750Z" level=info msg="CreateContainer within sandbox \"7ecb5549e29faa15326f5b400eb1c2f861db9061a57dfb2e46b3d7ba178ae79f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2\"" Dec 16 12:45:36.260388 containerd[1496]: time="2025-12-16T12:45:36.260308027Z" level=info msg="StartContainer for \"f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2\"" Dec 16 12:45:36.262312 containerd[1496]: time="2025-12-16T12:45:36.262277747Z" level=info msg="connecting to shim f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2" address="unix:///run/containerd/s/fe82dd3aa5475bb8821ce9d9aff103f6b611d0e9f47ae4d441d152fd7a850146" protocol=ttrpc version=3 Dec 16 12:45:36.281648 systemd[1]: Started cri-containerd-f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2.scope - libcontainer container f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2. Dec 16 12:45:36.315181 containerd[1496]: time="2025-12-16T12:45:36.315127333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-c8px6,Uid:a6dd1354-5e99-46e1-a5ed-77e1143acb05,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:45:36.330046 containerd[1496]: time="2025-12-16T12:45:36.329991857Z" level=info msg="connecting to shim 8943543b384ecfd203d423dcb03e91dae58f2937b90a857e1904fbb8998f64d9" address="unix:///run/containerd/s/114146b981f3b53ff59d4da7fd419fc9d6bfd29c431da1bc253f2f2aa074e3c9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:36.354122 systemd[1]: Started cri-containerd-8943543b384ecfd203d423dcb03e91dae58f2937b90a857e1904fbb8998f64d9.scope - libcontainer container 8943543b384ecfd203d423dcb03e91dae58f2937b90a857e1904fbb8998f64d9. 
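Every "connecting to shim" record above carries namespace=k8s.io, containerd's namespace for CRI-managed containers. A sketch of inspecting that namespace with the containerd Go client (1.x import path); the socket path /run/containerd/containerd.sock is the stock default and an assumption here, since the log only shows the per-shim sockets under /run/containerd/s/:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Stock management socket assumed; the log shows only per-shim sockets.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The same namespace the shim-connection records carry.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. f394ad95f713... for kube-proxy above
	}
}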
Dec 16 12:45:36.361865 containerd[1496]: time="2025-12-16T12:45:36.361822630Z" level=info msg="StartContainer for \"f394ad95f713aceddea49dd61ea1786839751e6654f9873e297136d43d7069e2\" returns successfully" Dec 16 12:45:36.395754 containerd[1496]: time="2025-12-16T12:45:36.395706046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-c8px6,Uid:a6dd1354-5e99-46e1-a5ed-77e1143acb05,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8943543b384ecfd203d423dcb03e91dae58f2937b90a857e1904fbb8998f64d9\"" Dec 16 12:45:36.399927 containerd[1496]: time="2025-12-16T12:45:36.399882216Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:45:37.064864 kubelet[2663]: E1216 12:45:37.064828 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:37.075153 kubelet[2663]: I1216 12:45:37.075089 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hqb48" podStartSLOduration=2.075071715 podStartE2EDuration="2.075071715s" podCreationTimestamp="2025-12-16 12:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:37.074854027 +0000 UTC m=+7.144045487" watchObservedRunningTime="2025-12-16 12:45:37.075071715 +0000 UTC m=+7.144263175" Dec 16 12:45:37.610702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567814592.mount: Deactivated successfully. Dec 16 12:45:38.327226 kubelet[2663]: E1216 12:45:38.327140 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:38.390195 containerd[1496]: time="2025-12-16T12:45:38.390148096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:38.390619 containerd[1496]: time="2025-12-16T12:45:38.390591392Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:45:38.391587 containerd[1496]: time="2025-12-16T12:45:38.391546386Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:38.395227 containerd[1496]: time="2025-12-16T12:45:38.395178119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:38.396503 containerd[1496]: time="2025-12-16T12:45:38.396468606Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.996546068s" Dec 16 12:45:38.396503 containerd[1496]: time="2025-12-16T12:45:38.396501207Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:45:38.402197 containerd[1496]: time="2025-12-16T12:45:38.402161893Z" level=info msg="CreateContainer 
within sandbox \"8943543b384ecfd203d423dcb03e91dae58f2937b90a857e1904fbb8998f64d9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:45:38.407817 containerd[1496]: time="2025-12-16T12:45:38.407778777Z" level=info msg="Container 656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:38.413247 containerd[1496]: time="2025-12-16T12:45:38.413186774Z" level=info msg="CreateContainer within sandbox \"8943543b384ecfd203d423dcb03e91dae58f2937b90a857e1904fbb8998f64d9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27\"" Dec 16 12:45:38.414538 containerd[1496]: time="2025-12-16T12:45:38.414508702Z" level=info msg="StartContainer for \"656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27\"" Dec 16 12:45:38.415613 containerd[1496]: time="2025-12-16T12:45:38.415573061Z" level=info msg="connecting to shim 656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27" address="unix:///run/containerd/s/114146b981f3b53ff59d4da7fd419fc9d6bfd29c431da1bc253f2f2aa074e3c9" protocol=ttrpc version=3 Dec 16 12:45:38.438079 systemd[1]: Started cri-containerd-656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27.scope - libcontainer container 656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27. Dec 16 12:45:38.467047 containerd[1496]: time="2025-12-16T12:45:38.467006292Z" level=info msg="StartContainer for \"656a31ebfe2beaadc1f1d93a884b8e523fd80aa5ee2054b80446ae0c09e6ee27\" returns successfully" Dec 16 12:45:39.072151 kubelet[2663]: E1216 12:45:39.072121 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:39.085954 kubelet[2663]: I1216 12:45:39.085886 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-c8px6" podStartSLOduration=2.084666918 podStartE2EDuration="4.085867244s" podCreationTimestamp="2025-12-16 12:45:35 +0000 UTC" firstStartedPulling="2025-12-16 12:45:36.398131984 +0000 UTC m=+6.467323444" lastFinishedPulling="2025-12-16 12:45:38.39933231 +0000 UTC m=+8.468523770" observedRunningTime="2025-12-16 12:45:39.085820203 +0000 UTC m=+9.155011663" watchObservedRunningTime="2025-12-16 12:45:39.085867244 +0000 UTC m=+9.155058704" Dec 16 12:45:39.556856 kubelet[2663]: E1216 12:45:39.556277 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:40.073963 kubelet[2663]: E1216 12:45:40.073917 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:40.077164 kubelet[2663]: E1216 12:45:40.077088 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:42.442524 kubelet[2663]: E1216 12:45:42.442248 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:43.913054 sudo[1716]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:43.915466 sshd[1715]: 
Connection closed by 10.0.0.1 port 41504 Dec 16 12:45:43.916115 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:43.921296 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:41504.service: Deactivated successfully. Dec 16 12:45:43.925369 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:45:43.925831 systemd[1]: session-7.scope: Consumed 6.544s CPU time, 219M memory peak. Dec 16 12:45:43.926937 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:45:43.928707 systemd-logind[1482]: Removed session 7. Dec 16 12:45:45.483567 update_engine[1485]: I20251216 12:45:45.483486 1485 update_attempter.cc:509] Updating boot flags... Dec 16 12:45:52.660136 systemd[1]: Created slice kubepods-besteffort-pod4ff45548_e42f_4c03_a911_914ae689f172.slice - libcontainer container kubepods-besteffort-pod4ff45548_e42f_4c03_a911_914ae689f172.slice. Dec 16 12:45:52.764930 kubelet[2663]: I1216 12:45:52.764872 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4ff45548-e42f-4c03-a911-914ae689f172-typha-certs\") pod \"calico-typha-74567d8bb6-6wnsz\" (UID: \"4ff45548-e42f-4c03-a911-914ae689f172\") " pod="calico-system/calico-typha-74567d8bb6-6wnsz" Dec 16 12:45:52.764930 kubelet[2663]: I1216 12:45:52.764929 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ff45548-e42f-4c03-a911-914ae689f172-tigera-ca-bundle\") pod \"calico-typha-74567d8bb6-6wnsz\" (UID: \"4ff45548-e42f-4c03-a911-914ae689f172\") " pod="calico-system/calico-typha-74567d8bb6-6wnsz" Dec 16 12:45:52.765325 kubelet[2663]: I1216 12:45:52.764960 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjdg2\" (UniqueName: \"kubernetes.io/projected/4ff45548-e42f-4c03-a911-914ae689f172-kube-api-access-vjdg2\") pod \"calico-typha-74567d8bb6-6wnsz\" (UID: \"4ff45548-e42f-4c03-a911-914ae689f172\") " pod="calico-system/calico-typha-74567d8bb6-6wnsz" Dec 16 12:45:52.833767 systemd[1]: Created slice kubepods-besteffort-pod9e625d1d_21f5_4a0a_991c_76f807678d6d.slice - libcontainer container kubepods-besteffort-pod9e625d1d_21f5_4a0a_991c_76f807678d6d.slice. 
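For the tigera/operator image pulled earlier ("active requests=0, bytes read=22152004", with the pull completing "in 1.996546068s"), the effective transfer rate falls out directly: roughly 11 MB/s. A one-screen check, both figures copied verbatim from the log:

package main

import "fmt"

func main() {
	// Both values verbatim from the quay.io/tigera/operator pull records.
	const bytesRead = 22152004
	const seconds = 1.996546068
	rate := bytesRead / seconds                              // bytes per second
	fmt.Printf("~%.1f MB/s effective pull rate\n", rate/1e6) // ~11.1 MB/s
}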
Dec 16 12:45:52.865746 kubelet[2663]: I1216 12:45:52.865684 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9e625d1d-21f5-4a0a-991c-76f807678d6d-node-certs\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.865746 kubelet[2663]: I1216 12:45:52.865734 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-cni-log-dir\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.865921 kubelet[2663]: I1216 12:45:52.865762 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-flexvol-driver-host\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.865921 kubelet[2663]: I1216 12:45:52.865833 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-xtables-lock\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.865921 kubelet[2663]: I1216 12:45:52.865883 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-cni-bin-dir\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.865921 kubelet[2663]: I1216 12:45:52.865904 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e625d1d-21f5-4a0a-991c-76f807678d6d-tigera-ca-bundle\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.866014 kubelet[2663]: I1216 12:45:52.865924 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-lib-modules\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.866014 kubelet[2663]: I1216 12:45:52.865939 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-var-lib-calico\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.866014 kubelet[2663]: I1216 12:45:52.865956 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-cni-net-dir\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.866014 kubelet[2663]: I1216 12:45:52.865971 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-var-run-calico\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.866014 kubelet[2663]: I1216 12:45:52.866009 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9e625d1d-21f5-4a0a-991c-76f807678d6d-policysync\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.866128 kubelet[2663]: I1216 12:45:52.866026 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wddh\" (UniqueName: \"kubernetes.io/projected/9e625d1d-21f5-4a0a-991c-76f807678d6d-kube-api-access-8wddh\") pod \"calico-node-86qsz\" (UID: \"9e625d1d-21f5-4a0a-991c-76f807678d6d\") " pod="calico-system/calico-node-86qsz"
Dec 16 12:45:52.964700 kubelet[2663]: E1216 12:45:52.964658 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:52.965496 containerd[1496]: time="2025-12-16T12:45:52.965459030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74567d8bb6-6wnsz,Uid:4ff45548-e42f-4c03-a911-914ae689f172,Namespace:calico-system,Attempt:0,}"
Dec 16 12:45:52.973386 kubelet[2663]: E1216 12:45:52.970556 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:45:52.973386 kubelet[2663]: W1216 12:45:52.970584 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:45:52.973386 kubelet[2663]: E1216 12:45:52.973042 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:45:53.030523 kubelet[2663]: E1216 12:45:53.030200 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee"
Dec 16 12:45:53.036589 containerd[1496]: time="2025-12-16T12:45:53.036531049Z" level=info msg="connecting to shim fcdba9602576b920d1bd250036222353a0178bb915cdbc8a5e42e034dabd0a15" address="unix:///run/containerd/s/0d4df50eda633d4d312d4e79f9e62c1e4299bc9a38c68f2c2c8001d9ee11ac3a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:45:53.041139 kubelet[2663]: E1216 12:45:53.040932 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:45:53.041252 kubelet[2663]: W1216 12:45:53.041132 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:45:53.041602 kubelet[2663]: E1216 12:45:53.041354 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:45:53.069945 kubelet[2663]: E1216 12:45:53.069911 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:45:53.069945 kubelet[2663]: W1216 12:45:53.069934 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:45:53.069945 kubelet[2663]: E1216 12:45:53.069950 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:45:53.070237 kubelet[2663]: I1216 12:45:53.069985 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bd9dfc7d-59c3-4082-b547-c4b54eeb1dee-registration-dir\") pod \"csi-node-driver-575rn\" (UID: \"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee\") " pod="calico-system/csi-node-driver-575rn"
Dec 16 12:45:53.070584 kubelet[2663]: I1216 12:45:53.070373 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bd9dfc7d-59c3-4082-b547-c4b54eeb1dee-varrun\") pod \"csi-node-driver-575rn\" (UID: \"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee\") " pod="calico-system/csi-node-driver-575rn"
Dec 16 12:45:53.071593 kubelet[2663]: I1216 12:45:53.071579 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bd9dfc7d-59c3-4082-b547-c4b54eeb1dee-socket-dir\") pod \"csi-node-driver-575rn\" (UID: \"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee\") " pod="calico-system/csi-node-driver-575rn"
Dec 16 12:45:53.072531 kubelet[2663]: I1216 12:45:53.072501 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x898d\" (UniqueName: \"kubernetes.io/projected/bd9dfc7d-59c3-4082-b547-c4b54eeb1dee-kube-api-access-x898d\") pod \"csi-node-driver-575rn\" (UID: \"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee\") " pod="calico-system/csi-node-driver-575rn"
Dec 16 12:45:53.072701 kubelet[2663]: I1216 12:45:53.072701 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd9dfc7d-59c3-4082-b547-c4b54eeb1dee-kubelet-dir\") pod \"csi-node-driver-575rn\" (UID: \"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee\") " pod="calico-system/csi-node-driver-575rn"
Dec 16 12:45:53.084991 systemd[1]: Started cri-containerd-fcdba9602576b920d1bd250036222353a0178bb915cdbc8a5e42e034dabd0a15.scope - libcontainer container fcdba9602576b920d1bd250036222353a0178bb915cdbc8a5e42e034dabd0a15.
Dec 16 12:45:53.136134 containerd[1496]: time="2025-12-16T12:45:53.136090614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74567d8bb6-6wnsz,Uid:4ff45548-e42f-4c03-a911-914ae689f172,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcdba9602576b920d1bd250036222353a0178bb915cdbc8a5e42e034dabd0a15\""
Dec 16 12:45:53.137149 kubelet[2663]: E1216 12:45:53.137123 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:53.137854 containerd[1496]: time="2025-12-16T12:45:53.137816564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86qsz,Uid:9e625d1d-21f5-4a0a-991c-76f807678d6d,Namespace:calico-system,Attempt:0,}"
Dec 16 12:45:53.142163 kubelet[2663]: E1216 12:45:53.142134 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:53.148245 containerd[1496]: time="2025-12-16T12:45:53.148206144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 12:45:53.172920 containerd[1496]: time="2025-12-16T12:45:53.172848931Z" level=info msg="connecting to shim 17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29" address="unix:///run/containerd/s/35c0cc4f1289fcb07c3ea5d148996fdfa0aa810640e148b8247c2fb20a6e179d" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:45:53.173610 kubelet[2663]: E1216 12:45:53.173571 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:45:53.173610 kubelet[2663]: W1216 12:45:53.173594 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:45:53.173754 kubelet[2663]: E1216 12:45:53.173615 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:45:53.219664 systemd[1]: Started cri-containerd-17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29.scope - libcontainer container 17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29.
Dec 16 12:45:53.249050 containerd[1496]: time="2025-12-16T12:45:53.248955650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86qsz,Uid:9e625d1d-21f5-4a0a-991c-76f807678d6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\""
Dec 16 12:45:53.249871 kubelet[2663]: E1216 12:45:53.249848 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:54.241285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215692062.mount: Deactivated successfully.
Dec 16 12:45:54.838532 containerd[1496]: time="2025-12-16T12:45:54.838476740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:54.839761 containerd[1496]: time="2025-12-16T12:45:54.839507077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 16 12:45:54.840666 containerd[1496]: time="2025-12-16T12:45:54.840617375Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:54.847854 containerd[1496]: time="2025-12-16T12:45:54.847807415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:45:54.848569 containerd[1496]: time="2025-12-16T12:45:54.848536027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.700284762s"
Dec 16 12:45:54.848621 containerd[1496]: time="2025-12-16T12:45:54.848574947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 16 12:45:54.849688 containerd[1496]: time="2025-12-16T12:45:54.849605965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 12:45:54.863807 containerd[1496]: time="2025-12-16T12:45:54.863764719Z" level=info msg="CreateContainer within sandbox \"fcdba9602576b920d1bd250036222353a0178bb915cdbc8a5e42e034dabd0a15\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 12:45:54.875692 containerd[1496]: time="2025-12-16T12:45:54.875624276Z" level=info msg="Container 60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:45:54.886543 containerd[1496]: time="2025-12-16T12:45:54.886499417Z" level=info msg="CreateContainer within sandbox \"fcdba9602576b920d1bd250036222353a0178bb915cdbc8a5e42e034dabd0a15\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5\""
Dec 16 12:45:54.887224 containerd[1496]: time="2025-12-16T12:45:54.887145868Z" level=info msg="StartContainer for \"60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5\""
Dec 16 12:45:54.888418 containerd[1496]: time="2025-12-16T12:45:54.888377368Z" level=info msg="connecting to shim 60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5" address="unix:///run/containerd/s/0d4df50eda633d4d312d4e79f9e62c1e4299bc9a38c68f2c2c8001d9ee11ac3a" protocol=ttrpc version=3
Dec 16 12:45:54.911674 systemd[1]: Started cri-containerd-60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5.scope - libcontainer container 60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5.
Dec 16 12:45:54.955203 containerd[1496]: time="2025-12-16T12:45:54.955141836Z" level=info msg="StartContainer for \"60eeaf0343506ce463c363459c758709c3897f3051fb0e868913a95c2cc4f9f5\" returns successfully"
Dec 16 12:45:55.036012 kubelet[2663]: E1216 12:45:55.035943 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee"
Dec 16 12:45:55.128556 kubelet[2663]: E1216 12:45:55.128321 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:45:55.159368 kubelet[2663]: I1216 12:45:55.159284 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74567d8bb6-6wnsz" podStartSLOduration=1.442768007 podStartE2EDuration="3.14946896s" podCreationTimestamp="2025-12-16 12:45:52 +0000 UTC" firstStartedPulling="2025-12-16 12:45:53.142623127 +0000 UTC m=+23.211814587" lastFinishedPulling="2025-12-16 12:45:54.84932404 +0000 UTC m=+24.918515540" observedRunningTime="2025-12-16 12:45:55.148107258 +0000 UTC m=+25.217298718" watchObservedRunningTime="2025-12-16 12:45:55.14946896 +0000 UTC m=+25.218660420"
Dec 16 12:45:55.168800 kubelet[2663]: E1216 12:45:55.168657 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:45:55.168800 kubelet[2663]: W1216 12:45:55.168685 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:45:55.171472 kubelet[2663]: E1216 12:45:55.169832 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input"
[The same three-entry FlexVolume probe failure (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "executable file not found in $PATH", plugins.go:703 "Error dynamically probing plugins") recurs for nodeagent~uds roughly twenty more times between 12:45:55.179 and 12:45:55.205; the duplicate entries are omitted here.]
Dec 16 12:45:55.205550 kubelet[2663]: E1216 12:45:55.205556 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:45:55.870865 containerd[1496]: time="2025-12-16T12:45:55.870791119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:55.871628 containerd[1496]: time="2025-12-16T12:45:55.871601132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 16 12:45:55.872885 containerd[1496]: time="2025-12-16T12:45:55.872863832Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:55.874959 containerd[1496]: time="2025-12-16T12:45:55.874916344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:55.875601 containerd[1496]: time="2025-12-16T12:45:55.875574155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.02593475s" Dec 16 12:45:55.875640 containerd[1496]: time="2025-12-16T12:45:55.875606755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:45:55.880143 containerd[1496]: time="2025-12-16T12:45:55.880108707Z" level=info msg="CreateContainer within sandbox \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:45:55.889238 containerd[1496]: time="2025-12-16T12:45:55.888511921Z" level=info msg="Container 14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:55.897584 containerd[1496]: time="2025-12-16T12:45:55.897536824Z" level=info msg="CreateContainer within sandbox \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a\"" Dec 16 12:45:55.898262 containerd[1496]: time="2025-12-16T12:45:55.898220635Z" level=info msg="StartContainer for \"14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a\"" Dec 16 12:45:55.900080 containerd[1496]: time="2025-12-16T12:45:55.900049224Z" level=info msg="connecting to shim 14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a" address="unix:///run/containerd/s/35c0cc4f1289fcb07c3ea5d148996fdfa0aa810640e148b8247c2fb20a6e179d" protocol=ttrpc version=3 Dec 16 12:45:55.922676 systemd[1]: Started cri-containerd-14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a.scope - libcontainer container 14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a. 
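[The burst of FlexVolume errors above comes from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for drivers: for each driver directory it execs the binary with the init command and expects a JSON status on stdout. The nodeagent~uds/uds binary is installed by the flexvol-driver (pod2daemon-flexvol) container that has just been created here, so until that container runs, the exec fails, the output is empty, and unmarshalling the empty output produces exactly "unexpected end of JSON input". A rough sketch of that call pattern, assuming a simplified status struct; the real logic lives in the kubelet's driver-call.go, which adds timeouts, more subcommands, and its own exec wrapper (hence the "$PATH" wording in the log):]

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a simplified stand-in for the FlexVolume driver
    // response (a status string plus an optional message).
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func probeInit(driverPath string) error {
        // The kubelet execs the driver binary with the "init" subcommand.
        // With the binary not yet installed, the exec itself fails and
        // out stays empty.
        out, execErr := exec.Command(driverPath, "init").CombinedOutput()

        // It then decodes stdout as JSON; unmarshalling empty output is
        // what yields "unexpected end of JSON input" in the log above.
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            return fmt.Errorf("driver call failed: exec=%v, unmarshal=%v", execErr, err)
        }
        if st.Status != "Success" {
            return fmt.Errorf("init returned %q: %s", st.Status, st.Message)
        }
        return nil
    }

    func main() {
        fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
    }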
Dec 16 12:45:55.989696 containerd[1496]: time="2025-12-16T12:45:55.989654770Z" level=info msg="StartContainer for \"14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a\" returns successfully" Dec 16 12:45:56.007400 systemd[1]: cri-containerd-14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a.scope: Deactivated successfully. Dec 16 12:45:56.035155 containerd[1496]: time="2025-12-16T12:45:56.035095032Z" level=info msg="received container exit event container_id:\"14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a\" id:\"14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a\" pid:3382 exited_at:{seconds:1765889156 nanos:21239700}" Dec 16 12:45:56.092404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14f2ea2e2603effab5b0ce9f9cd34da931e69a18f7495cfb29a05c1be835331a-rootfs.mount: Deactivated successfully. Dec 16 12:45:56.133055 kubelet[2663]: I1216 12:45:56.132901 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:45:56.133831 kubelet[2663]: E1216 12:45:56.133204 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:56.133831 kubelet[2663]: E1216 12:45:56.133234 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:57.036250 kubelet[2663]: E1216 12:45:57.036199 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:45:57.139959 kubelet[2663]: E1216 12:45:57.139320 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:57.143134 containerd[1496]: time="2025-12-16T12:45:57.143082827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:45:58.847076 containerd[1496]: time="2025-12-16T12:45:58.847025949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:58.848315 containerd[1496]: time="2025-12-16T12:45:58.848080524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:45:58.852832 containerd[1496]: time="2025-12-16T12:45:58.852426546Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:58.855554 containerd[1496]: time="2025-12-16T12:45:58.855462828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:58.856410 containerd[1496]: time="2025-12-16T12:45:58.855948435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.712810287s" Dec 16 12:45:58.856410 containerd[1496]: time="2025-12-16T12:45:58.855986756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:45:58.867415 containerd[1496]: time="2025-12-16T12:45:58.867372156Z" level=info msg="CreateContainer within sandbox \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:45:58.877338 containerd[1496]: time="2025-12-16T12:45:58.877281216Z" level=info msg="Container 380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:58.891821 containerd[1496]: time="2025-12-16T12:45:58.891745180Z" level=info msg="CreateContainer within sandbox \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a\"" Dec 16 12:45:58.892727 containerd[1496]: time="2025-12-16T12:45:58.892511151Z" level=info msg="StartContainer for \"380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a\"" Dec 16 12:45:58.894397 containerd[1496]: time="2025-12-16T12:45:58.894366697Z" level=info msg="connecting to shim 380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a" address="unix:///run/containerd/s/35c0cc4f1289fcb07c3ea5d148996fdfa0aa810640e148b8247c2fb20a6e179d" protocol=ttrpc version=3 Dec 16 12:45:58.916637 systemd[1]: Started cri-containerd-380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a.scope - libcontainer container 380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a. Dec 16 12:45:58.992341 containerd[1496]: time="2025-12-16T12:45:58.992184477Z" level=info msg="StartContainer for \"380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a\" returns successfully" Dec 16 12:45:59.035868 kubelet[2663]: E1216 12:45:59.035793 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:45:59.150087 kubelet[2663]: E1216 12:45:59.149617 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:59.545520 systemd[1]: cri-containerd-380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a.scope: Deactivated successfully. Dec 16 12:45:59.546640 systemd[1]: cri-containerd-380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a.scope: Consumed 512ms CPU time, 177.3M memory peak, 2.2M read from disk, 165.9M written to disk. 
Dec 16 12:45:59.548334 containerd[1496]: time="2025-12-16T12:45:59.548275473Z" level=info msg="received container exit event container_id:\"380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a\" id:\"380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a\" pid:3441 exited_at:{seconds:1765889159 nanos:548041430}" Dec 16 12:45:59.570773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-380452e847f949bb6923539d99e47c5d1625164f2dcae47b20c4e4adb7cb0c1a-rootfs.mount: Deactivated successfully. Dec 16 12:45:59.605568 kubelet[2663]: I1216 12:45:59.605517 2663 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:45:59.676889 systemd[1]: Created slice kubepods-burstable-pod1a4a4a45_5dac_4732_b17c_5369fab2f52d.slice - libcontainer container kubepods-burstable-pod1a4a4a45_5dac_4732_b17c_5369fab2f52d.slice. Dec 16 12:45:59.703186 systemd[1]: Created slice kubepods-besteffort-pod88af0c32_5f2c_41d6_ac54_1c0ec2b9ceb9.slice - libcontainer container kubepods-besteffort-pod88af0c32_5f2c_41d6_ac54_1c0ec2b9ceb9.slice. Dec 16 12:45:59.709718 systemd[1]: Created slice kubepods-burstable-podfd592319_f307_4a63_b9da_7593332cc589.slice - libcontainer container kubepods-burstable-podfd592319_f307_4a63_b9da_7593332cc589.slice. Dec 16 12:45:59.716878 systemd[1]: Created slice kubepods-besteffort-pod1f1fea87_d58f_4ba7_813c_87eb72bdb004.slice - libcontainer container kubepods-besteffort-pod1f1fea87_d58f_4ba7_813c_87eb72bdb004.slice. Dec 16 12:45:59.722419 systemd[1]: Created slice kubepods-besteffort-podb7cd9aef_f007_4e21_92a2_b7da7e34e076.slice - libcontainer container kubepods-besteffort-podb7cd9aef_f007_4e21_92a2_b7da7e34e076.slice. Dec 16 12:45:59.728929 systemd[1]: Created slice kubepods-besteffort-podfb33e682_5dfd_47f9_8045_4205f896ddba.slice - libcontainer container kubepods-besteffort-podfb33e682_5dfd_47f9_8045_4205f896ddba.slice. 
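[The kubepods-*.slice names in the slice-creation messages that follow reflect the kubelet's systemd cgroup driver convention: a QoS tier (burstable or besteffort; guaranteed pods sit directly under kubepods.slice with no tier) plus the pod UID with dashes mapped to underscores. A sketch of that naming rule, illustrative only and not the kubelet's actual code:]

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName mimics the naming convention visible below: QoS tier,
    // then the pod UID with dashes replaced by underscores.
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "1a4a4a45-5dac-4732-b17c-5369fab2f52d"))
        // kubepods-burstable-pod1a4a4a45_5dac_4732_b17c_5369fab2f52d.slice
        fmt.Println(podSliceName("besteffort", "88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9"))
    }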
Dec 16 12:45:59.730822 kubelet[2663]: I1216 12:45:59.730770 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djxt\" (UniqueName: \"kubernetes.io/projected/88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9-kube-api-access-5djxt\") pod \"calico-kube-controllers-76c76958cc-4g9pn\" (UID: \"88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9\") " pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" Dec 16 12:45:59.730909 kubelet[2663]: I1216 12:45:59.730829 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd592319-f307-4a63-b9da-7593332cc589-config-volume\") pod \"coredns-674b8bbfcf-n7ggc\" (UID: \"fd592319-f307-4a63-b9da-7593332cc589\") " pod="kube-system/coredns-674b8bbfcf-n7ggc" Dec 16 12:45:59.730909 kubelet[2663]: I1216 12:45:59.730857 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a4a4a45-5dac-4732-b17c-5369fab2f52d-config-volume\") pod \"coredns-674b8bbfcf-nx5dk\" (UID: \"1a4a4a45-5dac-4732-b17c-5369fab2f52d\") " pod="kube-system/coredns-674b8bbfcf-nx5dk" Dec 16 12:45:59.730909 kubelet[2663]: I1216 12:45:59.730887 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b03df5e-c87f-4925-bce5-1bc694fc45a1-calico-apiserver-certs\") pod \"calico-apiserver-7ddfc8d545-95q7h\" (UID: \"4b03df5e-c87f-4925-bce5-1bc694fc45a1\") " pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" Dec 16 12:45:59.731020 kubelet[2663]: I1216 12:45:59.731000 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwdl\" (UniqueName: \"kubernetes.io/projected/b7cd9aef-f007-4e21-92a2-b7da7e34e076-kube-api-access-tgwdl\") pod \"calico-apiserver-7ddfc8d545-2gnqr\" (UID: \"b7cd9aef-f007-4e21-92a2-b7da7e34e076\") " pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" Dec 16 12:45:59.731207 kubelet[2663]: I1216 12:45:59.731187 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7cd9aef-f007-4e21-92a2-b7da7e34e076-calico-apiserver-certs\") pod \"calico-apiserver-7ddfc8d545-2gnqr\" (UID: \"b7cd9aef-f007-4e21-92a2-b7da7e34e076\") " pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" Dec 16 12:45:59.731492 kubelet[2663]: I1216 12:45:59.731430 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtx9\" (UniqueName: \"kubernetes.io/projected/fb33e682-5dfd-47f9-8045-4205f896ddba-kube-api-access-ngtx9\") pod \"whisker-78694c998c-q4mnd\" (UID: \"fb33e682-5dfd-47f9-8045-4205f896ddba\") " pod="calico-system/whisker-78694c998c-q4mnd" Dec 16 12:45:59.731793 kubelet[2663]: I1216 12:45:59.731771 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f1fea87-d58f-4ba7-813c-87eb72bdb004-config\") pod \"goldmane-666569f655-x98rg\" (UID: \"1f1fea87-d58f-4ba7-813c-87eb72bdb004\") " pod="calico-system/goldmane-666569f655-x98rg" Dec 16 12:45:59.732380 kubelet[2663]: I1216 12:45:59.732197 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6qtf\" (UniqueName: 
\"kubernetes.io/projected/1f1fea87-d58f-4ba7-813c-87eb72bdb004-kube-api-access-j6qtf\") pod \"goldmane-666569f655-x98rg\" (UID: \"1f1fea87-d58f-4ba7-813c-87eb72bdb004\") " pod="calico-system/goldmane-666569f655-x98rg" Dec 16 12:45:59.732508 kubelet[2663]: I1216 12:45:59.732434 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6hqb\" (UniqueName: \"kubernetes.io/projected/4b03df5e-c87f-4925-bce5-1bc694fc45a1-kube-api-access-x6hqb\") pod \"calico-apiserver-7ddfc8d545-95q7h\" (UID: \"4b03df5e-c87f-4925-bce5-1bc694fc45a1\") " pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" Dec 16 12:45:59.732573 kubelet[2663]: I1216 12:45:59.732529 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f1fea87-d58f-4ba7-813c-87eb72bdb004-goldmane-ca-bundle\") pod \"goldmane-666569f655-x98rg\" (UID: \"1f1fea87-d58f-4ba7-813c-87eb72bdb004\") " pod="calico-system/goldmane-666569f655-x98rg" Dec 16 12:45:59.733153 kubelet[2663]: I1216 12:45:59.733113 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r88bc\" (UniqueName: \"kubernetes.io/projected/1a4a4a45-5dac-4732-b17c-5369fab2f52d-kube-api-access-r88bc\") pod \"coredns-674b8bbfcf-nx5dk\" (UID: \"1a4a4a45-5dac-4732-b17c-5369fab2f52d\") " pod="kube-system/coredns-674b8bbfcf-nx5dk" Dec 16 12:45:59.733338 kubelet[2663]: I1216 12:45:59.733306 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1f1fea87-d58f-4ba7-813c-87eb72bdb004-goldmane-key-pair\") pod \"goldmane-666569f655-x98rg\" (UID: \"1f1fea87-d58f-4ba7-813c-87eb72bdb004\") " pod="calico-system/goldmane-666569f655-x98rg" Dec 16 12:45:59.733529 kubelet[2663]: I1216 12:45:59.733427 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9-tigera-ca-bundle\") pod \"calico-kube-controllers-76c76958cc-4g9pn\" (UID: \"88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9\") " pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" Dec 16 12:45:59.733674 kubelet[2663]: I1216 12:45:59.733597 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-backend-key-pair\") pod \"whisker-78694c998c-q4mnd\" (UID: \"fb33e682-5dfd-47f9-8045-4205f896ddba\") " pod="calico-system/whisker-78694c998c-q4mnd" Dec 16 12:45:59.733761 kubelet[2663]: I1216 12:45:59.733733 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8c9\" (UniqueName: \"kubernetes.io/projected/fd592319-f307-4a63-b9da-7593332cc589-kube-api-access-sb8c9\") pod \"coredns-674b8bbfcf-n7ggc\" (UID: \"fd592319-f307-4a63-b9da-7593332cc589\") " pod="kube-system/coredns-674b8bbfcf-n7ggc" Dec 16 12:45:59.733808 kubelet[2663]: I1216 12:45:59.733794 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-ca-bundle\") pod \"whisker-78694c998c-q4mnd\" (UID: \"fb33e682-5dfd-47f9-8045-4205f896ddba\") " 
pod="calico-system/whisker-78694c998c-q4mnd" Dec 16 12:45:59.739462 systemd[1]: Created slice kubepods-besteffort-pod4b03df5e_c87f_4925_bce5_1bc694fc45a1.slice - libcontainer container kubepods-besteffort-pod4b03df5e_c87f_4925_bce5_1bc694fc45a1.slice. Dec 16 12:45:59.998714 kubelet[2663]: E1216 12:45:59.998679 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:45:59.999256 containerd[1496]: time="2025-12-16T12:45:59.999201197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nx5dk,Uid:1a4a4a45-5dac-4732-b17c-5369fab2f52d,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:00.007800 containerd[1496]: time="2025-12-16T12:46:00.007662908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c76958cc-4g9pn,Uid:88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:00.013926 kubelet[2663]: E1216 12:46:00.013895 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:00.017348 containerd[1496]: time="2025-12-16T12:46:00.017244793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7ggc,Uid:fd592319-f307-4a63-b9da-7593332cc589,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:00.021994 containerd[1496]: time="2025-12-16T12:46:00.020309273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x98rg,Uid:1f1fea87-d58f-4ba7-813c-87eb72bdb004,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:00.026617 containerd[1496]: time="2025-12-16T12:46:00.026538555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-2gnqr,Uid:b7cd9aef-f007-4e21-92a2-b7da7e34e076,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:00.037660 containerd[1496]: time="2025-12-16T12:46:00.037603340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78694c998c-q4mnd,Uid:fb33e682-5dfd-47f9-8045-4205f896ddba,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:00.042798 containerd[1496]: time="2025-12-16T12:46:00.042737567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-95q7h,Uid:4b03df5e-c87f-4925-bce5-1bc694fc45a1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:00.141850 containerd[1496]: time="2025-12-16T12:46:00.141761743Z" level=error msg="Failed to destroy network for sandbox \"c887afd1007ddafde73f5473c1df9f57f771c0078e3707fa3d4e440c9e7b72bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.143318 containerd[1496]: time="2025-12-16T12:46:00.143214602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x98rg,Uid:1f1fea87-d58f-4ba7-813c-87eb72bdb004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c887afd1007ddafde73f5473c1df9f57f771c0078e3707fa3d4e440c9e7b72bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.145188 containerd[1496]: time="2025-12-16T12:46:00.143788769Z" level=error msg="Failed to destroy network for sandbox 
\"296275c0955075f2eab8500ddaa04d3a92ba53816293a8b48ac711d384f298b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.145188 containerd[1496]: time="2025-12-16T12:46:00.145092586Z" level=error msg="Failed to destroy network for sandbox \"af9fc42c2836e24bd0a3cace7b122ccef2d0ec115ec35ad9d5e1b5fd79eee53b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.146785 kubelet[2663]: E1216 12:46:00.146731 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c887afd1007ddafde73f5473c1df9f57f771c0078e3707fa3d4e440c9e7b72bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.147419 kubelet[2663]: E1216 12:46:00.146823 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c887afd1007ddafde73f5473c1df9f57f771c0078e3707fa3d4e440c9e7b72bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x98rg" Dec 16 12:46:00.147419 kubelet[2663]: E1216 12:46:00.146843 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c887afd1007ddafde73f5473c1df9f57f771c0078e3707fa3d4e440c9e7b72bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-x98rg" Dec 16 12:46:00.147419 kubelet[2663]: E1216 12:46:00.146907 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-x98rg_calico-system(1f1fea87-d58f-4ba7-813c-87eb72bdb004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-x98rg_calico-system(1f1fea87-d58f-4ba7-813c-87eb72bdb004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c887afd1007ddafde73f5473c1df9f57f771c0078e3707fa3d4e440c9e7b72bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-x98rg" podUID="1f1fea87-d58f-4ba7-813c-87eb72bdb004" Dec 16 12:46:00.149157 containerd[1496]: time="2025-12-16T12:46:00.149100319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c76958cc-4g9pn,Uid:88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"296275c0955075f2eab8500ddaa04d3a92ba53816293a8b48ac711d384f298b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.149488 kubelet[2663]: E1216 12:46:00.149424 2663 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296275c0955075f2eab8500ddaa04d3a92ba53816293a8b48ac711d384f298b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.149488 kubelet[2663]: E1216 12:46:00.149481 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296275c0955075f2eab8500ddaa04d3a92ba53816293a8b48ac711d384f298b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" Dec 16 12:46:00.149592 kubelet[2663]: E1216 12:46:00.149500 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296275c0955075f2eab8500ddaa04d3a92ba53816293a8b48ac711d384f298b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" Dec 16 12:46:00.149592 kubelet[2663]: E1216 12:46:00.149539 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76c76958cc-4g9pn_calico-system(88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76c76958cc-4g9pn_calico-system(88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"296275c0955075f2eab8500ddaa04d3a92ba53816293a8b48ac711d384f298b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" podUID="88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9" Dec 16 12:46:00.150287 containerd[1496]: time="2025-12-16T12:46:00.150245854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nx5dk,Uid:1a4a4a45-5dac-4732-b17c-5369fab2f52d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fc42c2836e24bd0a3cace7b122ccef2d0ec115ec35ad9d5e1b5fd79eee53b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.150487 kubelet[2663]: E1216 12:46:00.150439 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fc42c2836e24bd0a3cace7b122ccef2d0ec115ec35ad9d5e1b5fd79eee53b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.150527 kubelet[2663]: E1216 12:46:00.150491 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fc42c2836e24bd0a3cace7b122ccef2d0ec115ec35ad9d5e1b5fd79eee53b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nx5dk" Dec 16 12:46:00.150527 kubelet[2663]: E1216 12:46:00.150509 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9fc42c2836e24bd0a3cace7b122ccef2d0ec115ec35ad9d5e1b5fd79eee53b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nx5dk" Dec 16 12:46:00.150801 kubelet[2663]: E1216 12:46:00.150543 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nx5dk_kube-system(1a4a4a45-5dac-4732-b17c-5369fab2f52d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nx5dk_kube-system(1a4a4a45-5dac-4732-b17c-5369fab2f52d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af9fc42c2836e24bd0a3cace7b122ccef2d0ec115ec35ad9d5e1b5fd79eee53b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nx5dk" podUID="1a4a4a45-5dac-4732-b17c-5369fab2f52d" Dec 16 12:46:00.151296 containerd[1496]: time="2025-12-16T12:46:00.151266507Z" level=error msg="Failed to destroy network for sandbox \"d8f7fdc1ff311bd201994cb77a7bab3024860bd697153abb6d8d9367f8e5750c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.161227 containerd[1496]: time="2025-12-16T12:46:00.161127756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7ggc,Uid:fd592319-f307-4a63-b9da-7593332cc589,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f7fdc1ff311bd201994cb77a7bab3024860bd697153abb6d8d9367f8e5750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.161436 kubelet[2663]: E1216 12:46:00.161366 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f7fdc1ff311bd201994cb77a7bab3024860bd697153abb6d8d9367f8e5750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.161486 kubelet[2663]: E1216 12:46:00.161461 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f7fdc1ff311bd201994cb77a7bab3024860bd697153abb6d8d9367f8e5750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n7ggc" Dec 16 12:46:00.161515 kubelet[2663]: E1216 12:46:00.161482 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"d8f7fdc1ff311bd201994cb77a7bab3024860bd697153abb6d8d9367f8e5750c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n7ggc" Dec 16 12:46:00.161553 kubelet[2663]: E1216 12:46:00.161528 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n7ggc_kube-system(fd592319-f307-4a63-b9da-7593332cc589)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n7ggc_kube-system(fd592319-f307-4a63-b9da-7593332cc589)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8f7fdc1ff311bd201994cb77a7bab3024860bd697153abb6d8d9367f8e5750c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n7ggc" podUID="fd592319-f307-4a63-b9da-7593332cc589" Dec 16 12:46:00.165017 containerd[1496]: time="2025-12-16T12:46:00.164977766Z" level=error msg="Failed to destroy network for sandbox \"10c89ad286c7749b1a68b8ad7fa93d8a373ab6352b7b410ce01144bad10bc409\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.167154 kubelet[2663]: E1216 12:46:00.167113 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:00.167847 containerd[1496]: time="2025-12-16T12:46:00.167799083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78694c998c-q4mnd,Uid:fb33e682-5dfd-47f9-8045-4205f896ddba,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c89ad286c7749b1a68b8ad7fa93d8a373ab6352b7b410ce01144bad10bc409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.167989 kubelet[2663]: E1216 12:46:00.167967 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c89ad286c7749b1a68b8ad7fa93d8a373ab6352b7b410ce01144bad10bc409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.168138 kubelet[2663]: E1216 12:46:00.168000 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c89ad286c7749b1a68b8ad7fa93d8a373ab6352b7b410ce01144bad10bc409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78694c998c-q4mnd" Dec 16 12:46:00.168138 kubelet[2663]: E1216 12:46:00.168018 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c89ad286c7749b1a68b8ad7fa93d8a373ab6352b7b410ce01144bad10bc409\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78694c998c-q4mnd" Dec 16 12:46:00.168138 kubelet[2663]: E1216 12:46:00.168050 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78694c998c-q4mnd_calico-system(fb33e682-5dfd-47f9-8045-4205f896ddba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78694c998c-q4mnd_calico-system(fb33e682-5dfd-47f9-8045-4205f896ddba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10c89ad286c7749b1a68b8ad7fa93d8a373ab6352b7b410ce01144bad10bc409\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78694c998c-q4mnd" podUID="fb33e682-5dfd-47f9-8045-4205f896ddba" Dec 16 12:46:00.168287 containerd[1496]: time="2025-12-16T12:46:00.168267809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:46:00.181414 containerd[1496]: time="2025-12-16T12:46:00.181332900Z" level=error msg="Failed to destroy network for sandbox \"1704244d8b76afddc9b0f59dc99369becf3d70329ad4486b0a544cb41c679aa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.183202 containerd[1496]: time="2025-12-16T12:46:00.183135764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-2gnqr,Uid:b7cd9aef-f007-4e21-92a2-b7da7e34e076,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1704244d8b76afddc9b0f59dc99369becf3d70329ad4486b0a544cb41c679aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.183549 kubelet[2663]: E1216 12:46:00.183419 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1704244d8b76afddc9b0f59dc99369becf3d70329ad4486b0a544cb41c679aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.183617 kubelet[2663]: E1216 12:46:00.183571 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1704244d8b76afddc9b0f59dc99369becf3d70329ad4486b0a544cb41c679aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" Dec 16 12:46:00.183617 kubelet[2663]: E1216 12:46:00.183592 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1704244d8b76afddc9b0f59dc99369becf3d70329ad4486b0a544cb41c679aa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" Dec 16 12:46:00.183786 kubelet[2663]: E1216 12:46:00.183667 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ddfc8d545-2gnqr_calico-apiserver(b7cd9aef-f007-4e21-92a2-b7da7e34e076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ddfc8d545-2gnqr_calico-apiserver(b7cd9aef-f007-4e21-92a2-b7da7e34e076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1704244d8b76afddc9b0f59dc99369becf3d70329ad4486b0a544cb41c679aa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" podUID="b7cd9aef-f007-4e21-92a2-b7da7e34e076" Dec 16 12:46:00.185761 containerd[1496]: time="2025-12-16T12:46:00.185718918Z" level=error msg="Failed to destroy network for sandbox \"62845c1456060fd69a55c6550efd6140b4db4129e2f03bcd41824a9dcffad592\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.188314 containerd[1496]: time="2025-12-16T12:46:00.188164110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-95q7h,Uid:4b03df5e-c87f-4925-bce5-1bc694fc45a1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62845c1456060fd69a55c6550efd6140b4db4129e2f03bcd41824a9dcffad592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.188672 kubelet[2663]: E1216 12:46:00.188614 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62845c1456060fd69a55c6550efd6140b4db4129e2f03bcd41824a9dcffad592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:00.188753 kubelet[2663]: E1216 12:46:00.188694 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62845c1456060fd69a55c6550efd6140b4db4129e2f03bcd41824a9dcffad592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" Dec 16 12:46:00.188753 kubelet[2663]: E1216 12:46:00.188715 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62845c1456060fd69a55c6550efd6140b4db4129e2f03bcd41824a9dcffad592\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" Dec 16 12:46:00.188828 kubelet[2663]: E1216 12:46:00.188760 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7ddfc8d545-95q7h_calico-apiserver(4b03df5e-c87f-4925-bce5-1bc694fc45a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ddfc8d545-95q7h_calico-apiserver(4b03df5e-c87f-4925-bce5-1bc694fc45a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62845c1456060fd69a55c6550efd6140b4db4129e2f03bcd41824a9dcffad592\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" podUID="4b03df5e-c87f-4925-bce5-1bc694fc45a1" Dec 16 12:46:00.876605 systemd[1]: run-netns-cni\x2d02cbc597\x2d344c\x2d65ec\x2d44d4\x2dae4886b2ffa4.mount: Deactivated successfully. Dec 16 12:46:00.876690 systemd[1]: run-netns-cni\x2d7c598e9c\x2d1fef\x2d877c\x2d3d05\x2d99067cc9905a.mount: Deactivated successfully. Dec 16 12:46:00.876733 systemd[1]: run-netns-cni\x2d34271422\x2dfdbf\x2dad3a\x2d66bd\x2dc0ed2505aa76.mount: Deactivated successfully. Dec 16 12:46:00.876773 systemd[1]: run-netns-cni\x2d7bf62974\x2d3275\x2d9258\x2d48ed\x2dcde6087fdf28.mount: Deactivated successfully. Dec 16 12:46:00.959613 kubelet[2663]: I1216 12:46:00.959585 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:00.960264 kubelet[2663]: E1216 12:46:00.960245 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:01.058407 systemd[1]: Created slice kubepods-besteffort-podbd9dfc7d_59c3_4082_b547_c4b54eeb1dee.slice - libcontainer container kubepods-besteffort-podbd9dfc7d_59c3_4082_b547_c4b54eeb1dee.slice. Dec 16 12:46:01.061541 containerd[1496]: time="2025-12-16T12:46:01.061470308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575rn,Uid:bd9dfc7d-59c3-4082-b547-c4b54eeb1dee,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:01.124979 containerd[1496]: time="2025-12-16T12:46:01.124926869Z" level=error msg="Failed to destroy network for sandbox \"6414e4bdd3f8b6721fbca193d4789b99f879bfa4bc5a74fa0ca3b9a39d00caa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:01.126733 containerd[1496]: time="2025-12-16T12:46:01.126693051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575rn,Uid:bd9dfc7d-59c3-4082-b547-c4b54eeb1dee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6414e4bdd3f8b6721fbca193d4789b99f879bfa4bc5a74fa0ca3b9a39d00caa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:01.126911 systemd[1]: run-netns-cni\x2db069b4b9\x2dd6e5\x2df734\x2dc90a\x2d62cbe46f9bf9.mount: Deactivated successfully. 
Dec 16 12:46:01.128878 kubelet[2663]: E1216 12:46:01.126918 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6414e4bdd3f8b6721fbca193d4789b99f879bfa4bc5a74fa0ca3b9a39d00caa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:01.128878 kubelet[2663]: E1216 12:46:01.126974 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6414e4bdd3f8b6721fbca193d4789b99f879bfa4bc5a74fa0ca3b9a39d00caa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-575rn" Dec 16 12:46:01.128878 kubelet[2663]: E1216 12:46:01.126997 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6414e4bdd3f8b6721fbca193d4789b99f879bfa4bc5a74fa0ca3b9a39d00caa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-575rn" Dec 16 12:46:01.129120 kubelet[2663]: E1216 12:46:01.127048 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-575rn_calico-system(bd9dfc7d-59c3-4082-b547-c4b54eeb1dee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-575rn_calico-system(bd9dfc7d-59c3-4082-b547-c4b54eeb1dee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6414e4bdd3f8b6721fbca193d4789b99f879bfa4bc5a74fa0ca3b9a39d00caa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:46:01.169791 kubelet[2663]: E1216 12:46:01.169739 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:03.122392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161495879.mount: Deactivated successfully. 
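Every sandbox ADD/DEL failure in this stretch (12:46:00 through 12:46:01) reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only exists once the calico/node container has started and mounted /var/lib/calico/. A minimal Go sketch of that gate (our illustration of the logged check, not the plugin's actual source):

```go
package main

import (
	"fmt"
	"os"
)

// calico/node writes this file once it is up; until then the CNI plugin
// fails every sandbox ADD/DEL exactly as logged above.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("plugin type=%q failed (add): %v\n", "calico", err)
		os.Exit(1)
	}
	fmt.Println("nodename present; sandbox networking can proceed")
}
```

Consistent with that, the failures stop once the calico/node image finishes pulling and the container starts in the entries that follow.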
Dec 16 12:46:03.536331 containerd[1496]: time="2025-12-16T12:46:03.536007171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:46:03.536331 containerd[1496]: time="2025-12-16T12:46:03.536114732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:03.550632 containerd[1496]: time="2025-12-16T12:46:03.550401580Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:03.552925 containerd[1496]: time="2025-12-16T12:46:03.552866809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.384567759s" Dec 16 12:46:03.552925 containerd[1496]: time="2025-12-16T12:46:03.552903970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:46:03.560793 containerd[1496]: time="2025-12-16T12:46:03.560621941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:03.574813 containerd[1496]: time="2025-12-16T12:46:03.574769547Z" level=info msg="CreateContainer within sandbox \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:46:03.585662 containerd[1496]: time="2025-12-16T12:46:03.585611635Z" level=info msg="Container e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:03.598463 containerd[1496]: time="2025-12-16T12:46:03.598392145Z" level=info msg="CreateContainer within sandbox \"17f93f753e9bfec095ac562a9565f1639e3574ce25611df959274d0159204e29\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64\"" Dec 16 12:46:03.599224 containerd[1496]: time="2025-12-16T12:46:03.599193795Z" level=info msg="StartContainer for \"e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64\"" Dec 16 12:46:03.601028 containerd[1496]: time="2025-12-16T12:46:03.600997376Z" level=info msg="connecting to shim e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64" address="unix:///run/containerd/s/35c0cc4f1289fcb07c3ea5d148996fdfa0aa810640e148b8247c2fb20a6e179d" protocol=ttrpc version=3 Dec 16 12:46:03.652669 systemd[1]: Started cri-containerd-e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64.scope - libcontainer container e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64. Dec 16 12:46:03.736294 containerd[1496]: time="2025-12-16T12:46:03.736222049Z" level=info msg="StartContainer for \"e57c3c09baddc9359ecedb5a51ebd5aee005611ef711b15b5942ec311c962b64\" returns successfully" Dec 16 12:46:03.863262 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:46:03.863402 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Dec 16 12:46:04.167178 kubelet[2663]: I1216 12:46:04.167123 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngtx9\" (UniqueName: \"kubernetes.io/projected/fb33e682-5dfd-47f9-8045-4205f896ddba-kube-api-access-ngtx9\") pod \"fb33e682-5dfd-47f9-8045-4205f896ddba\" (UID: \"fb33e682-5dfd-47f9-8045-4205f896ddba\") " Dec 16 12:46:04.167577 kubelet[2663]: I1216 12:46:04.167198 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-backend-key-pair\") pod \"fb33e682-5dfd-47f9-8045-4205f896ddba\" (UID: \"fb33e682-5dfd-47f9-8045-4205f896ddba\") " Dec 16 12:46:04.167577 kubelet[2663]: I1216 12:46:04.167229 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-ca-bundle\") pod \"fb33e682-5dfd-47f9-8045-4205f896ddba\" (UID: \"fb33e682-5dfd-47f9-8045-4205f896ddba\") " Dec 16 12:46:04.175096 kubelet[2663]: I1216 12:46:04.175045 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb33e682-5dfd-47f9-8045-4205f896ddba-kube-api-access-ngtx9" (OuterVolumeSpecName: "kube-api-access-ngtx9") pod "fb33e682-5dfd-47f9-8045-4205f896ddba" (UID: "fb33e682-5dfd-47f9-8045-4205f896ddba"). InnerVolumeSpecName "kube-api-access-ngtx9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:46:04.175813 kubelet[2663]: I1216 12:46:04.175778 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fb33e682-5dfd-47f9-8045-4205f896ddba" (UID: "fb33e682-5dfd-47f9-8045-4205f896ddba"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:46:04.176186 systemd[1]: var-lib-kubelet-pods-fb33e682\x2d5dfd\x2d47f9\x2d8045\x2d4205f896ddba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngtx9.mount: Deactivated successfully. Dec 16 12:46:04.176521 systemd[1]: var-lib-kubelet-pods-fb33e682\x2d5dfd\x2d47f9\x2d8045\x2d4205f896ddba-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:46:04.184199 kubelet[2663]: I1216 12:46:04.183427 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fb33e682-5dfd-47f9-8045-4205f896ddba" (UID: "fb33e682-5dfd-47f9-8045-4205f896ddba"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:46:04.184199 kubelet[2663]: E1216 12:46:04.183992 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:04.205506 kubelet[2663]: I1216 12:46:04.204743 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-86qsz" podStartSLOduration=1.90007905 podStartE2EDuration="12.204727328s" podCreationTimestamp="2025-12-16 12:45:52 +0000 UTC" firstStartedPulling="2025-12-16 12:45:53.250782161 +0000 UTC m=+23.319973581" lastFinishedPulling="2025-12-16 12:46:03.555430399 +0000 UTC m=+33.624621859" observedRunningTime="2025-12-16 12:46:04.204580326 +0000 UTC m=+34.273771826" watchObservedRunningTime="2025-12-16 12:46:04.204727328 +0000 UTC m=+34.273918788" Dec 16 12:46:04.269279 kubelet[2663]: I1216 12:46:04.267910 2663 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:46:04.269279 kubelet[2663]: I1216 12:46:04.267951 2663 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb33e682-5dfd-47f9-8045-4205f896ddba-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:46:04.269279 kubelet[2663]: I1216 12:46:04.267962 2663 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngtx9\" (UniqueName: \"kubernetes.io/projected/fb33e682-5dfd-47f9-8045-4205f896ddba-kube-api-access-ngtx9\") on node \"localhost\" DevicePath \"\"" Dec 16 12:46:04.489403 systemd[1]: Removed slice kubepods-besteffort-podfb33e682_5dfd_47f9_8045_4205f896ddba.slice - libcontainer container kubepods-besteffort-podfb33e682_5dfd_47f9_8045_4205f896ddba.slice. Dec 16 12:46:04.549677 systemd[1]: Created slice kubepods-besteffort-podd3573843_84a0_4e96_b493_87073cbb0cd2.slice - libcontainer container kubepods-besteffort-podd3573843_84a0_4e96_b493_87073cbb0cd2.slice. 
Dec 16 12:46:04.570318 kubelet[2663]: I1216 12:46:04.570278 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3573843-84a0-4e96-b493-87073cbb0cd2-whisker-backend-key-pair\") pod \"whisker-955ff858c-k74cv\" (UID: \"d3573843-84a0-4e96-b493-87073cbb0cd2\") " pod="calico-system/whisker-955ff858c-k74cv" Dec 16 12:46:04.570481 kubelet[2663]: I1216 12:46:04.570334 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3573843-84a0-4e96-b493-87073cbb0cd2-whisker-ca-bundle\") pod \"whisker-955ff858c-k74cv\" (UID: \"d3573843-84a0-4e96-b493-87073cbb0cd2\") " pod="calico-system/whisker-955ff858c-k74cv" Dec 16 12:46:04.570481 kubelet[2663]: I1216 12:46:04.570359 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzs8\" (UniqueName: \"kubernetes.io/projected/d3573843-84a0-4e96-b493-87073cbb0cd2-kube-api-access-dlzs8\") pod \"whisker-955ff858c-k74cv\" (UID: \"d3573843-84a0-4e96-b493-87073cbb0cd2\") " pod="calico-system/whisker-955ff858c-k74cv" Dec 16 12:46:04.855228 containerd[1496]: time="2025-12-16T12:46:04.855120339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955ff858c-k74cv,Uid:d3573843-84a0-4e96-b493-87073cbb0cd2,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:05.022523 systemd-networkd[1432]: cali7475e021719: Link UP Dec 16 12:46:05.022729 systemd-networkd[1432]: cali7475e021719: Gained carrier Dec 16 12:46:05.035985 containerd[1496]: 2025-12-16 12:46:04.877 [INFO][3822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:05.035985 containerd[1496]: 2025-12-16 12:46:04.911 [INFO][3822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--955ff858c--k74cv-eth0 whisker-955ff858c- calico-system d3573843-84a0-4e96-b493-87073cbb0cd2 911 0 2025-12-16 12:46:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:955ff858c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-955ff858c-k74cv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7475e021719 [] [] }} ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-" Dec 16 12:46:05.035985 containerd[1496]: 2025-12-16 12:46:04.911 [INFO][3822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.035985 containerd[1496]: 2025-12-16 12:46:04.976 [INFO][3837] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" HandleID="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Workload="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.976 [INFO][3837] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" 
HandleID="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Workload="localhost-k8s-whisker--955ff858c--k74cv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-955ff858c-k74cv", "timestamp":"2025-12-16 12:46:04.976064357 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.976 [INFO][3837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.976 [INFO][3837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.976 [INFO][3837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.987 [INFO][3837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" host="localhost" Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.993 [INFO][3837] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:04.998 [INFO][3837] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:05.000 [INFO][3837] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:05.002 [INFO][3837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:05.036212 containerd[1496]: 2025-12-16 12:46:05.002 [INFO][3837] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" host="localhost" Dec 16 12:46:05.036408 containerd[1496]: 2025-12-16 12:46:05.004 [INFO][3837] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec Dec 16 12:46:05.036408 containerd[1496]: 2025-12-16 12:46:05.009 [INFO][3837] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" host="localhost" Dec 16 12:46:05.036408 containerd[1496]: 2025-12-16 12:46:05.014 [INFO][3837] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" host="localhost" Dec 16 12:46:05.036408 containerd[1496]: 2025-12-16 12:46:05.014 [INFO][3837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" host="localhost" Dec 16 12:46:05.036408 containerd[1496]: 2025-12-16 12:46:05.014 [INFO][3837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:05.036408 containerd[1496]: 2025-12-16 12:46:05.014 [INFO][3837] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" HandleID="k8s-pod-network.8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Workload="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.036556 containerd[1496]: 2025-12-16 12:46:05.016 [INFO][3822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--955ff858c--k74cv-eth0", GenerateName:"whisker-955ff858c-", Namespace:"calico-system", SelfLink:"", UID:"d3573843-84a0-4e96-b493-87073cbb0cd2", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955ff858c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-955ff858c-k74cv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7475e021719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:05.036556 containerd[1496]: 2025-12-16 12:46:05.016 [INFO][3822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.036624 containerd[1496]: 2025-12-16 12:46:05.016 [INFO][3822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7475e021719 ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.036624 containerd[1496]: 2025-12-16 12:46:05.023 [INFO][3822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.036666 containerd[1496]: 2025-12-16 12:46:05.023 [INFO][3822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--955ff858c--k74cv-eth0", GenerateName:"whisker-955ff858c-", Namespace:"calico-system", SelfLink:"", UID:"d3573843-84a0-4e96-b493-87073cbb0cd2", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955ff858c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec", Pod:"whisker-955ff858c-k74cv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7475e021719", MAC:"52:a5:00:d1:00:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:05.036714 containerd[1496]: 2025-12-16 12:46:05.033 [INFO][3822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" Namespace="calico-system" Pod="whisker-955ff858c-k74cv" WorkloadEndpoint="localhost-k8s-whisker--955ff858c--k74cv-eth0" Dec 16 12:46:05.107034 containerd[1496]: time="2025-12-16T12:46:05.106925330Z" level=info msg="connecting to shim 8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec" address="unix:///run/containerd/s/c1dc1cd414c8084f988a43cf9cc026da253b6eebdc7f01ab5a037ac7ddc878a4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:05.144670 systemd[1]: Started cri-containerd-8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec.scope - libcontainer container 8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec. 
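The ipam/ipam.go trace above shows how each pod IP comes out of a host-affine block: this node holds 192.168.88.128/26, and the whisker pod takes the first free address (.129); the sandboxes set up later in this log take .130 and .131. A sketch of the block geometry using Go's standard library:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block the IPAM trace above confirms affinity for.
	block := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Println(block, "holds", 1<<(32-block.Bits()), "addresses") // 64: .128-.191

	// .128 is the block base; sequential assignment hands out .129, .130,
	// .131, matching the /32s written into the endpoints in this log.
	ip := block.Addr()
	for i := 0; i < 3; i++ {
		ip = ip.Next()
		fmt.Println("assigned:", ip)
	}
}
```

The "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" pairs bracket each assignment, which is why concurrent sandbox setups in the later entries serialize on the same block.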
Dec 16 12:46:05.155334 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:05.174674 containerd[1496]: time="2025-12-16T12:46:05.174632757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955ff858c-k74cv,Uid:d3573843-84a0-4e96-b493-87073cbb0cd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a946643c6daff3b0cecfc5ff0d658921c16d8389b3bbd3ae15edf1f0dfd31ec\"" Dec 16 12:46:05.176937 containerd[1496]: time="2025-12-16T12:46:05.176911182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:05.186170 kubelet[2663]: E1216 12:46:05.186124 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:05.399046 containerd[1496]: time="2025-12-16T12:46:05.398683390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:05.400199 containerd[1496]: time="2025-12-16T12:46:05.400074805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:05.400199 containerd[1496]: time="2025-12-16T12:46:05.400103085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:46:05.404960 kubelet[2663]: E1216 12:46:05.404906 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:05.406543 kubelet[2663]: E1216 12:46:05.406503 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:05.417575 kubelet[2663]: E1216 12:46:05.417515 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e429b7d76e594d25b7b642aec08c2a17,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlzs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-955ff858c-k74cv_calico-system(d3573843-84a0-4e96-b493-87073cbb0cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:05.420487 containerd[1496]: time="2025-12-16T12:46:05.420220227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:05.665423 containerd[1496]: time="2025-12-16T12:46:05.665313052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:05.683186 containerd[1496]: time="2025-12-16T12:46:05.683046528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:05.683186 containerd[1496]: time="2025-12-16T12:46:05.683099489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:46:05.683340 kubelet[2663]: E1216 12:46:05.683261 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:05.683340 kubelet[2663]: E1216 12:46:05.683316 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:05.683736 kubelet[2663]: E1216 12:46:05.683435 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlzs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-955ff858c-k74cv_calico-system(d3573843-84a0-4e96-b493-87073cbb0cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:05.684883 kubelet[2663]: E1216 12:46:05.684843 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-955ff858c-k74cv" podUID="d3573843-84a0-4e96-b493-87073cbb0cd2" Dec 16 12:46:05.730236 systemd-networkd[1432]: vxlan.calico: Link UP Dec 16 12:46:05.730245 systemd-networkd[1432]: vxlan.calico: Gained carrier Dec 16 12:46:06.039093 kubelet[2663]: I1216 12:46:06.039038 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fb33e682-5dfd-47f9-8045-4205f896ddba" path="/var/lib/kubelet/pods/fb33e682-5dfd-47f9-8045-4205f896ddba/volumes" Dec 16 12:46:06.188745 kubelet[2663]: E1216 12:46:06.188613 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:06.191136 kubelet[2663]: E1216 12:46:06.191077 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-955ff858c-k74cv" podUID="d3573843-84a0-4e96-b493-87073cbb0cd2" Dec 16 12:46:06.458613 systemd-networkd[1432]: cali7475e021719: Gained IPv6LL Dec 16 12:46:07.034624 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Dec 16 12:46:11.353770 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:51826.service - OpenSSH per-connection server daemon (10.0.0.1:51826). Dec 16 12:46:11.415419 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 51826 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:11.417094 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:11.422912 systemd-logind[1482]: New session 8 of user core. Dec 16 12:46:11.426668 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:46:11.611462 sshd[4176]: Connection closed by 10.0.0.1 port 51826 Dec 16 12:46:11.611693 sshd-session[4173]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:11.615571 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:51826.service: Deactivated successfully. Dec 16 12:46:11.617370 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:46:11.620063 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:46:11.621963 systemd-logind[1482]: Removed session 8. 
Dec 16 12:46:12.038849 kubelet[2663]: E1216 12:46:12.037592 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:12.039198 containerd[1496]: time="2025-12-16T12:46:12.037991418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c76958cc-4g9pn,Uid:88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:12.039198 containerd[1496]: time="2025-12-16T12:46:12.038575944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nx5dk,Uid:1a4a4a45-5dac-4732-b17c-5369fab2f52d,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:12.039198 containerd[1496]: time="2025-12-16T12:46:12.038740305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-95q7h,Uid:4b03df5e-c87f-4925-bce5-1bc694fc45a1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:12.039697 containerd[1496]: time="2025-12-16T12:46:12.039264230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x98rg,Uid:1f1fea87-d58f-4ba7-813c-87eb72bdb004,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:12.226739 systemd-networkd[1432]: cali724fe988d0a: Link UP Dec 16 12:46:12.227276 systemd-networkd[1432]: cali724fe988d0a: Gained carrier Dec 16 12:46:12.253689 containerd[1496]: 2025-12-16 12:46:12.130 [INFO][4201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0 calico-apiserver-7ddfc8d545- calico-apiserver 4b03df5e-c87f-4925-bce5-1bc694fc45a1 838 0 2025-12-16 12:45:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ddfc8d545 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7ddfc8d545-95q7h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali724fe988d0a [] [] }} ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-" Dec 16 12:46:12.253689 containerd[1496]: 2025-12-16 12:46:12.130 [INFO][4201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.253689 containerd[1496]: 2025-12-16 12:46:12.174 [INFO][4266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" HandleID="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Workload="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.174 [INFO][4266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" HandleID="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Workload="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40004380b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7ddfc8d545-95q7h", "timestamp":"2025-12-16 12:46:12.174541298 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.174 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.174 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.174 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.188 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" host="localhost" Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.196 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.202 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.206 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.208 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.253919 containerd[1496]: 2025-12-16 12:46:12.208 [INFO][4266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" host="localhost" Dec 16 12:46:12.254109 containerd[1496]: 2025-12-16 12:46:12.210 [INFO][4266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d Dec 16 12:46:12.254109 containerd[1496]: 2025-12-16 12:46:12.214 [INFO][4266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" host="localhost" Dec 16 12:46:12.254109 containerd[1496]: 2025-12-16 12:46:12.220 [INFO][4266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" host="localhost" Dec 16 12:46:12.254109 containerd[1496]: 2025-12-16 12:46:12.220 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" host="localhost" Dec 16 12:46:12.254109 containerd[1496]: 2025-12-16 12:46:12.221 [INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:12.254109 containerd[1496]: 2025-12-16 12:46:12.221 [INFO][4266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" HandleID="k8s-pod-network.41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Workload="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.254300 containerd[1496]: 2025-12-16 12:46:12.223 [INFO][4201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0", GenerateName:"calico-apiserver-7ddfc8d545-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b03df5e-c87f-4925-bce5-1bc694fc45a1", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddfc8d545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7ddfc8d545-95q7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali724fe988d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.254359 containerd[1496]: 2025-12-16 12:46:12.223 [INFO][4201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.254359 containerd[1496]: 2025-12-16 12:46:12.223 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali724fe988d0a ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.254359 containerd[1496]: 2025-12-16 12:46:12.226 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.254415 containerd[1496]: 2025-12-16 12:46:12.226 [INFO][4201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0", GenerateName:"calico-apiserver-7ddfc8d545-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b03df5e-c87f-4925-bce5-1bc694fc45a1", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddfc8d545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d", Pod:"calico-apiserver-7ddfc8d545-95q7h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali724fe988d0a", MAC:"32:fa:07:8c:46:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.254816 containerd[1496]: 2025-12-16 12:46:12.250 [INFO][4201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-95q7h" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--95q7h-eth0" Dec 16 12:46:12.348076 systemd-networkd[1432]: calic06e60ae466: Link UP Dec 16 12:46:12.349442 systemd-networkd[1432]: calic06e60ae466: Gained carrier Dec 16 12:46:12.374191 containerd[1496]: 2025-12-16 12:46:12.117 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0 coredns-674b8bbfcf- kube-system 1a4a4a45-5dac-4732-b17c-5369fab2f52d 829 0 2025-12-16 12:45:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nx5dk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic06e60ae466 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-" Dec 16 12:46:12.374191 containerd[1496]: 2025-12-16 12:46:12.117 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 
12:46:12.374191 containerd[1496]: 2025-12-16 12:46:12.175 [INFO][4254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" HandleID="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Workload="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.175 [INFO][4254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" HandleID="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Workload="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004e60a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nx5dk", "timestamp":"2025-12-16 12:46:12.175107623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.175 [INFO][4254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.221 [INFO][4254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.221 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.289 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" host="localhost" Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.294 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.302 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.304 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.307 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.374377 containerd[1496]: 2025-12-16 12:46:12.307 [INFO][4254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" host="localhost" Dec 16 12:46:12.375082 containerd[1496]: 2025-12-16 12:46:12.308 [INFO][4254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc Dec 16 12:46:12.375082 containerd[1496]: 2025-12-16 12:46:12.330 [INFO][4254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" host="localhost" Dec 16 12:46:12.375082 containerd[1496]: 2025-12-16 12:46:12.341 [INFO][4254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" host="localhost" Dec 16 12:46:12.375082 containerd[1496]: 
2025-12-16 12:46:12.341 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" host="localhost" Dec 16 12:46:12.375082 containerd[1496]: 2025-12-16 12:46:12.341 [INFO][4254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:12.375082 containerd[1496]: 2025-12-16 12:46:12.341 [INFO][4254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" HandleID="k8s-pod-network.b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Workload="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 12:46:12.375226 containerd[1496]: 2025-12-16 12:46:12.345 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1a4a4a45-5dac-4732-b17c-5369fab2f52d", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nx5dk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic06e60ae466", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.375288 containerd[1496]: 2025-12-16 12:46:12.345 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 12:46:12.375288 containerd[1496]: 2025-12-16 12:46:12.345 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic06e60ae466 ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 
12:46:12.375288 containerd[1496]: 2025-12-16 12:46:12.349 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 12:46:12.375354 containerd[1496]: 2025-12-16 12:46:12.350 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1a4a4a45-5dac-4732-b17c-5369fab2f52d", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc", Pod:"coredns-674b8bbfcf-nx5dk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic06e60ae466", MAC:"56:c8:ce:69:88:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.375354 containerd[1496]: 2025-12-16 12:46:12.371 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" Namespace="kube-system" Pod="coredns-674b8bbfcf-nx5dk" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nx5dk-eth0" Dec 16 12:46:12.433475 containerd[1496]: time="2025-12-16T12:46:12.433098126Z" level=info msg="connecting to shim 41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d" address="unix:///run/containerd/s/7b0a5bb07a7693e6c45d53b54f833b8550771156cf9555ec182d6b283824fd96" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:12.454672 systemd[1]: Started cri-containerd-41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d.scope - libcontainer container 41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d. 
Dec 16 12:46:12.473898 systemd-networkd[1432]: cali077efa5fa92: Link UP Dec 16 12:46:12.474318 systemd-networkd[1432]: cali077efa5fa92: Gained carrier Dec 16 12:46:12.489134 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:12.494786 containerd[1496]: time="2025-12-16T12:46:12.494583324Z" level=info msg="connecting to shim b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc" address="unix:///run/containerd/s/f96658daff51a0290b8dfceaf3a4fa8a0834dbe8dacf101f361c56c61133909d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.128 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0 calico-kube-controllers-76c76958cc- calico-system 88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9 834 0 2025-12-16 12:45:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76c76958cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-76c76958cc-4g9pn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali077efa5fa92 [] [] }} ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.129 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.179 [INFO][4260] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" HandleID="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Workload="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.179 [INFO][4260] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" HandleID="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Workload="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-76c76958cc-4g9pn", "timestamp":"2025-12-16 12:46:12.179793546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.179 [INFO][4260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.341 [INFO][4260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.342 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.390 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.425 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.451 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.453 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.455 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.455 [INFO][4260] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.458 [INFO][4260] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.463 [INFO][4260] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.468 [INFO][4260] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.468 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" host="localhost" Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.468 [INFO][4260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:12.495758 containerd[1496]: 2025-12-16 12:46:12.468 [INFO][4260] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" HandleID="k8s-pod-network.2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Workload="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.496399 containerd[1496]: 2025-12-16 12:46:12.471 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0", GenerateName:"calico-kube-controllers-76c76958cc-", Namespace:"calico-system", SelfLink:"", UID:"88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76c76958cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-76c76958cc-4g9pn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali077efa5fa92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.496399 containerd[1496]: 2025-12-16 12:46:12.471 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.496399 containerd[1496]: 2025-12-16 12:46:12.471 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali077efa5fa92 ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.496399 containerd[1496]: 2025-12-16 12:46:12.473 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.496399 containerd[1496]: 2025-12-16 12:46:12.474 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0", GenerateName:"calico-kube-controllers-76c76958cc-", Namespace:"calico-system", SelfLink:"", UID:"88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76c76958cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d", Pod:"calico-kube-controllers-76c76958cc-4g9pn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali077efa5fa92", MAC:"aa:fe:32:9b:e2:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.496399 containerd[1496]: 2025-12-16 12:46:12.491 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" Namespace="calico-system" Pod="calico-kube-controllers-76c76958cc-4g9pn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76c76958cc--4g9pn-eth0" Dec 16 12:46:12.521650 systemd[1]: Started cri-containerd-b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc.scope - libcontainer container b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc. 
Dec 16 12:46:12.522720 containerd[1496]: time="2025-12-16T12:46:12.522681700Z" level=info msg="connecting to shim 2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d" address="unix:///run/containerd/s/6c6669f96257185f31d79addab85184f27b6cca1cf5f66b9f98bf343b4ce560f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:12.529228 containerd[1496]: time="2025-12-16T12:46:12.529192399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-95q7h,Uid:4b03df5e-c87f-4925-bce5-1bc694fc45a1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41daca55479e38996bef5ee196bdb8aec9bb8874cc29d7a3e40ca784b3f41e2d\"" Dec 16 12:46:12.533057 containerd[1496]: time="2025-12-16T12:46:12.533016713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:12.543884 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:12.557610 systemd[1]: Started cri-containerd-2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d.scope - libcontainer container 2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d. Dec 16 12:46:12.564274 systemd-networkd[1432]: calic46675f35ad: Link UP Dec 16 12:46:12.566874 systemd-networkd[1432]: calic46675f35ad: Gained carrier Dec 16 12:46:12.576386 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.146 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--x98rg-eth0 goldmane-666569f655- calico-system 1f1fea87-d58f-4ba7-813c-87eb72bdb004 836 0 2025-12-16 12:45:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-x98rg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic46675f35ad [] [] }} ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.146 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.193 [INFO][4274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" HandleID="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Workload="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.193 [INFO][4274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" HandleID="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Workload="localhost-k8s-goldmane--666569f655--x98rg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2090), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-x98rg", "timestamp":"2025-12-16 12:46:12.193595991 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.194 [INFO][4274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.468 [INFO][4274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.469 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.491 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.497 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.534 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.536 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.540 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.540 [INFO][4274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.543 [INFO][4274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.548 [INFO][4274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.555 [INFO][4274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.555 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" host="localhost" Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.555 [INFO][4274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:12.593482 containerd[1496]: 2025-12-16 12:46:12.555 [INFO][4274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" HandleID="k8s-pod-network.c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Workload="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.594038 containerd[1496]: 2025-12-16 12:46:12.558 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--x98rg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1f1fea87-d58f-4ba7-813c-87eb72bdb004", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-x98rg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic46675f35ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.594038 containerd[1496]: 2025-12-16 12:46:12.558 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.594038 containerd[1496]: 2025-12-16 12:46:12.559 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic46675f35ad ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.594038 containerd[1496]: 2025-12-16 12:46:12.567 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.594038 containerd[1496]: 2025-12-16 12:46:12.570 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--x98rg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1f1fea87-d58f-4ba7-813c-87eb72bdb004", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab", Pod:"goldmane-666569f655-x98rg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic46675f35ad", MAC:"c6:65:2e:91:15:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:12.594038 containerd[1496]: 2025-12-16 12:46:12.584 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" Namespace="calico-system" Pod="goldmane-666569f655-x98rg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--x98rg-eth0" Dec 16 12:46:12.610862 containerd[1496]: time="2025-12-16T12:46:12.610745659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nx5dk,Uid:1a4a4a45-5dac-4732-b17c-5369fab2f52d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc\"" Dec 16 12:46:12.613186 kubelet[2663]: E1216 12:46:12.613027 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:12.620291 containerd[1496]: time="2025-12-16T12:46:12.620249266Z" level=info msg="connecting to shim c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab" address="unix:///run/containerd/s/97ed74931f0f76300812d4e0b18cb8c24952dd1151eef7cb7f09c23115f32033" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:12.623823 containerd[1496]: time="2025-12-16T12:46:12.623754857Z" level=info msg="CreateContainer within sandbox \"b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:12.629888 containerd[1496]: time="2025-12-16T12:46:12.629848273Z" level=info msg="Container 607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:12.637956 containerd[1496]: time="2025-12-16T12:46:12.637909906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76c76958cc-4g9pn,Uid:88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bb1cdfa0e99290cf64cde97706ddd4354e4bc4bc6733fd2ebe2e8d467c80a0d\"" Dec 16 12:46:12.638370 
containerd[1496]: time="2025-12-16T12:46:12.638291469Z" level=info msg="CreateContainer within sandbox \"b1a1e64bd1eec3e52eae73fc774b5f5a594dea649cd985d1671e8635d4e584bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580\"" Dec 16 12:46:12.638992 containerd[1496]: time="2025-12-16T12:46:12.638957636Z" level=info msg="StartContainer for \"607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580\"" Dec 16 12:46:12.640751 containerd[1496]: time="2025-12-16T12:46:12.640584650Z" level=info msg="connecting to shim 607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580" address="unix:///run/containerd/s/f96658daff51a0290b8dfceaf3a4fa8a0834dbe8dacf101f361c56c61133909d" protocol=ttrpc version=3 Dec 16 12:46:12.664633 systemd[1]: Started cri-containerd-607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580.scope - libcontainer container 607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580. Dec 16 12:46:12.669036 systemd[1]: Started cri-containerd-c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab.scope - libcontainer container c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab. Dec 16 12:46:12.682953 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:12.698865 containerd[1496]: time="2025-12-16T12:46:12.698825379Z" level=info msg="StartContainer for \"607a96fd25a0ee0757c3755ac2e7b210aa0e34ef0e040f0825a040363c61c580\" returns successfully" Dec 16 12:46:12.721096 containerd[1496]: time="2025-12-16T12:46:12.721027141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-x98rg,Uid:1f1fea87-d58f-4ba7-813c-87eb72bdb004,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6e269064e53bc4498f639c4b6c8332f7a264cad44c9a6fc641a55f1cdaafcab\"" Dec 16 12:46:12.755683 containerd[1496]: time="2025-12-16T12:46:12.755640455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:12.756834 containerd[1496]: time="2025-12-16T12:46:12.756696665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:12.756834 containerd[1496]: time="2025-12-16T12:46:12.756786465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:46:12.756990 kubelet[2663]: E1216 12:46:12.756926 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:12.757063 kubelet[2663]: E1216 12:46:12.756993 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:12.757624 kubelet[2663]: E1216 12:46:12.757340 2663 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddfc8d545-95q7h_calico-apiserver(4b03df5e-c87f-4925-bce5-1bc694fc45a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:12.757867 containerd[1496]: time="2025-12-16T12:46:12.757832275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:12.758933 kubelet[2663]: E1216 12:46:12.758892 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" podUID="4b03df5e-c87f-4925-bce5-1bc694fc45a1" Dec 16 12:46:12.962813 containerd[1496]: time="2025-12-16T12:46:12.962702735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:12.963701 containerd[1496]: time="2025-12-16T12:46:12.963667864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:12.963747 containerd[1496]: time="2025-12-16T12:46:12.963706224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:46:12.964061 kubelet[2663]: E1216 12:46:12.963887 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:12.964061 kubelet[2663]: E1216 12:46:12.963937 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:12.964272 kubelet[2663]: E1216 12:46:12.964205 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5djxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76c76958cc-4g9pn_calico-system(88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:12.964563 containerd[1496]: time="2025-12-16T12:46:12.964509352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:12.965723 kubelet[2663]: E1216 12:46:12.965547 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" podUID="88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9" Dec 16 12:46:13.185962 containerd[1496]: time="2025-12-16T12:46:13.185918083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:13.187369 containerd[1496]: time="2025-12-16T12:46:13.187291655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:13.187439 containerd[1496]: time="2025-12-16T12:46:13.187360575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:46:13.187594 kubelet[2663]: E1216 12:46:13.187544 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:13.187826 kubelet[2663]: E1216 12:46:13.187605 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:13.187826 kubelet[2663]: E1216 
12:46:13.187768 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6qtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x98rg_calico-system(1f1fea87-d58f-4ba7-813c-87eb72bdb004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:13.189086 kubelet[2663]: E1216 12:46:13.189021 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x98rg" 
podUID="1f1fea87-d58f-4ba7-813c-87eb72bdb004" Dec 16 12:46:13.206674 kubelet[2663]: E1216 12:46:13.206641 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:13.209801 kubelet[2663]: E1216 12:46:13.209741 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x98rg" podUID="1f1fea87-d58f-4ba7-813c-87eb72bdb004" Dec 16 12:46:13.213182 kubelet[2663]: E1216 12:46:13.212644 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" podUID="88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9" Dec 16 12:46:13.214582 kubelet[2663]: E1216 12:46:13.214549 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" podUID="4b03df5e-c87f-4925-bce5-1bc694fc45a1" Dec 16 12:46:13.218545 kubelet[2663]: I1216 12:46:13.218485 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nx5dk" podStartSLOduration=38.218471771 podStartE2EDuration="38.218471771s" podCreationTimestamp="2025-12-16 12:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:13.217759445 +0000 UTC m=+43.286950905" watchObservedRunningTime="2025-12-16 12:46:13.218471771 +0000 UTC m=+43.287663231" Dec 16 12:46:13.818601 systemd-networkd[1432]: cali077efa5fa92: Gained IPv6LL Dec 16 12:46:13.882609 systemd-networkd[1432]: calic46675f35ad: Gained IPv6LL Dec 16 12:46:14.010575 systemd-networkd[1432]: cali724fe988d0a: Gained IPv6LL Dec 16 12:46:14.216220 kubelet[2663]: E1216 12:46:14.216154 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:14.216955 kubelet[2663]: E1216 12:46:14.216185 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x98rg" podUID="1f1fea87-d58f-4ba7-813c-87eb72bdb004" Dec 16 12:46:14.217232 kubelet[2663]: E1216 12:46:14.217180 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" podUID="88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9" Dec 16 12:46:14.217376 kubelet[2663]: E1216 12:46:14.217333 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" podUID="4b03df5e-c87f-4925-bce5-1bc694fc45a1" Dec 16 12:46:14.266619 systemd-networkd[1432]: calic06e60ae466: Gained IPv6LL Dec 16 12:46:15.036250 kubelet[2663]: E1216 12:46:15.036000 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:15.036492 containerd[1496]: time="2025-12-16T12:46:15.036418796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575rn,Uid:bd9dfc7d-59c3-4082-b547-c4b54eeb1dee,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:15.037633 containerd[1496]: time="2025-12-16T12:46:15.036753199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-2gnqr,Uid:b7cd9aef-f007-4e21-92a2-b7da7e34e076,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:15.037633 containerd[1496]: time="2025-12-16T12:46:15.036972760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7ggc,Uid:fd592319-f307-4a63-b9da-7593332cc589,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:15.184287 systemd-networkd[1432]: calic392a8adfa2: Link UP Dec 16 12:46:15.184782 systemd-networkd[1432]: calic392a8adfa2: Gained carrier Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.098 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0 calico-apiserver-7ddfc8d545- calico-apiserver b7cd9aef-f007-4e21-92a2-b7da7e34e076 837 0 2025-12-16 12:45:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ddfc8d545 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7ddfc8d545-2gnqr eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calic392a8adfa2 [] [] }} ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.098 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.136 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" HandleID="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Workload="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.136 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" HandleID="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Workload="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7ddfc8d545-2gnqr", "timestamp":"2025-12-16 12:46:15.136624325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.136 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.137 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.137 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.149 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.154 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.159 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.161 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.165 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.165 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.168 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.172 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.178 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.178 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" host="localhost" Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.178 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
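
The ipam/ipam.go walk above — acquire the host-wide lock, look up the host's affinities, try the affine 192.168.88.128/26 block, claim the next free address (192.168.88.134), release the lock — is Calico's block-affinity allocation pattern. A minimal Go sketch of that flow, under illustrative names only (this is not Calico's actual code or API):

package main

import (
	"fmt"
	"net"
	"sync"
)

// ipamBlock models one /26 block pinned to a host, mirroring the
// "Trying affinity for 192.168.88.128/26" step in the log above.
type ipamBlock struct {
	cidr      *net.IPNet
	affinity  string          // host this block is affine to
	allocated map[string]bool // ip -> already claimed
}

var hostLock sync.Mutex // stands in for the "host-wide IPAM lock"

// autoAssign claims the next free address from a block affine to host.
func autoAssign(b *ipamBlock, host string) (net.IP, error) {
	hostLock.Lock() // "About to acquire host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."
	if b.affinity != host {
		return nil, fmt.Errorf("block %s not affine to %s", b.cidr, host)
	}
	// Walk the block and claim the first unallocated address.
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if !b.allocated[ip.String()] {
			b.allocated[ip.String()] = true // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns ip + 1, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &ipamBlock{cidr: cidr, affinity: "localhost", allocated: map[string]bool{}}
	// Pretend six earlier pods took .128-.133; the next claim yields .134.
	for i := 0; i < 6; i++ {
		autoAssign(b, "localhost")
	}
	ip, _ := autoAssign(b, "localhost")
	fmt.Println(ip) // 192.168.88.134, matching the claim in the log
}

The per-host block affinity is what lets each node hand out addresses from its own /26 with only a short lock hold, rather than coordinating every single assignment through the datastore.
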
Dec 16 12:46:15.203472 containerd[1496]: 2025-12-16 12:46:15.178 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" HandleID="k8s-pod-network.960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Workload="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.204003 containerd[1496]: 2025-12-16 12:46:15.181 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0", GenerateName:"calico-apiserver-7ddfc8d545-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7cd9aef-f007-4e21-92a2-b7da7e34e076", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddfc8d545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7ddfc8d545-2gnqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic392a8adfa2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:15.204003 containerd[1496]: 2025-12-16 12:46:15.181 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.204003 containerd[1496]: 2025-12-16 12:46:15.181 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic392a8adfa2 ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.204003 containerd[1496]: 2025-12-16 12:46:15.185 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.204003 containerd[1496]: 2025-12-16 12:46:15.185 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0", GenerateName:"calico-apiserver-7ddfc8d545-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7cd9aef-f007-4e21-92a2-b7da7e34e076", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ddfc8d545", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe", Pod:"calico-apiserver-7ddfc8d545-2gnqr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic392a8adfa2", MAC:"a6:fe:e4:2d:4c:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:15.204003 containerd[1496]: 2025-12-16 12:46:15.199 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" Namespace="calico-apiserver" Pod="calico-apiserver-7ddfc8d545-2gnqr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ddfc8d545--2gnqr-eth0" Dec 16 12:46:15.227731 kubelet[2663]: E1216 12:46:15.227691 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:15.240739 containerd[1496]: time="2025-12-16T12:46:15.240631167Z" level=info msg="connecting to shim 960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe" address="unix:///run/containerd/s/1a4326fdb12aed614dcb38e7d101c715a6a8ea5f586a8322a86b7fb9beeca161" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:15.270690 systemd[1]: Started cri-containerd-960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe.scope - libcontainer container 960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe. 
Dec 16 12:46:15.287618 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:15.294757 systemd-networkd[1432]: cali53ef37c9db5: Link UP Dec 16 12:46:15.295440 systemd-networkd[1432]: cali53ef37c9db5: Gained carrier Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.102 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--575rn-eth0 csi-node-driver- calico-system bd9dfc7d-59c3-4082-b547-c4b54eeb1dee 737 0 2025-12-16 12:45:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-575rn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali53ef37c9db5 [] [] }} ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.103 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.138 [INFO][4612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" HandleID="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Workload="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.138 [INFO][4612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" HandleID="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Workload="localhost-k8s-csi--node--driver--575rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-575rn", "timestamp":"2025-12-16 12:46:15.138218819 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.138 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.178 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.178 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.251 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.259 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.264 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.266 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.269 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.269 [INFO][4612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.271 [INFO][4612] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70 Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.276 [INFO][4612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.283 [INFO][4612] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.283 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" host="localhost" Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.284 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:15.315540 containerd[1496]: 2025-12-16 12:46:15.284 [INFO][4612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" HandleID="k8s-pod-network.bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Workload="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.316682 containerd[1496]: 2025-12-16 12:46:15.289 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--575rn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-575rn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53ef37c9db5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:15.316682 containerd[1496]: 2025-12-16 12:46:15.289 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.316682 containerd[1496]: 2025-12-16 12:46:15.290 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53ef37c9db5 ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.316682 containerd[1496]: 2025-12-16 12:46:15.295 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.316682 containerd[1496]: 2025-12-16 12:46:15.296 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--575rn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd9dfc7d-59c3-4082-b547-c4b54eeb1dee", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70", Pod:"csi-node-driver-575rn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53ef37c9db5", MAC:"3a:2f:12:65:c5:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:15.316682 containerd[1496]: 2025-12-16 12:46:15.307 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" Namespace="calico-system" Pod="csi-node-driver-575rn" WorkloadEndpoint="localhost-k8s-csi--node--driver--575rn-eth0" Dec 16 12:46:15.328975 containerd[1496]: time="2025-12-16T12:46:15.328905315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ddfc8d545-2gnqr,Uid:b7cd9aef-f007-4e21-92a2-b7da7e34e076,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"960f6cdadea4829d75293ac115fd83f9c6d9d34d10983c37b654bfec51174ffe\"" Dec 16 12:46:15.331342 containerd[1496]: time="2025-12-16T12:46:15.331225695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:15.341630 containerd[1496]: time="2025-12-16T12:46:15.341590303Z" level=info msg="connecting to shim bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70" address="unix:///run/containerd/s/1313aa6fd289fa6dfb6b881378af35b7f523807347f0b72c3f7a5b00155bc5ee" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:15.372751 systemd[1]: Started cri-containerd-bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70.scope - libcontainer container bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70. 
Dec 16 12:46:15.388743 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:15.394574 systemd-networkd[1432]: cali7a6790ed1fe: Link UP Dec 16 12:46:15.394717 systemd-networkd[1432]: cali7a6790ed1fe: Gained carrier Dec 16 12:46:15.411503 containerd[1496]: time="2025-12-16T12:46:15.411396055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-575rn,Uid:bd9dfc7d-59c3-4082-b547-c4b54eeb1dee,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc17d34ce629a235eb7c30ca05454ce8939f8f0fd9421351017a0733911b8b70\"" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.104 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0 coredns-674b8bbfcf- kube-system fd592319-f307-4a63-b9da-7593332cc589 835 0 2025-12-16 12:45:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-n7ggc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a6790ed1fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.104 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.139 [INFO][4606] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" HandleID="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Workload="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.139 [INFO][4606] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" HandleID="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Workload="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-n7ggc", "timestamp":"2025-12-16 12:46:15.139762392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.139 [INFO][4606] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.284 [INFO][4606] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.284 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.351 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.359 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.366 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.368 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.370 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.370 [INFO][4606] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.373 [INFO][4606] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63 Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.377 [INFO][4606] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.386 [INFO][4606] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.386 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" host="localhost" Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.386 [INFO][4606] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:15.415321 containerd[1496]: 2025-12-16 12:46:15.386 [INFO][4606] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" HandleID="k8s-pod-network.8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Workload="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.416431 containerd[1496]: 2025-12-16 12:46:15.391 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd592319-f307-4a63-b9da-7593332cc589", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-n7ggc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6790ed1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:15.416431 containerd[1496]: 2025-12-16 12:46:15.391 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.416431 containerd[1496]: 2025-12-16 12:46:15.391 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a6790ed1fe ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.416431 containerd[1496]: 2025-12-16 12:46:15.397 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.416431 
containerd[1496]: 2025-12-16 12:46:15.398 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fd592319-f307-4a63-b9da-7593332cc589", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63", Pod:"coredns-674b8bbfcf-n7ggc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6790ed1fe", MAC:"d2:76:fe:c3:6d:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:15.416431 containerd[1496]: 2025-12-16 12:46:15.410 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" Namespace="kube-system" Pod="coredns-674b8bbfcf-n7ggc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n7ggc-eth0" Dec 16 12:46:15.449779 containerd[1496]: time="2025-12-16T12:46:15.449665979Z" level=info msg="connecting to shim 8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63" address="unix:///run/containerd/s/b106ab901f25c7815839d74cb420d1859ad00398aa5b3633d8ce819cd744a210" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:15.470651 systemd[1]: Started cri-containerd-8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63.scope - libcontainer container 8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63. 
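
The WorkloadEndpointPort dumps above print ports in hex: 0x35 is DNS's port 53 (both the UDP and TCP entries) and 0x23c1 is CoreDNS's metrics port 9153. A one-line check:

package main

import "fmt"

func main() {
	// Decode the hex ports from the endpoint dump.
	fmt.Printf("0x35 = %d, 0x23c1 = %d\n", 0x35, 0x23c1) // 0x35 = 53, 0x23c1 = 9153
}
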
Dec 16 12:46:15.482319 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:46:15.504063 containerd[1496]: time="2025-12-16T12:46:15.503949279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n7ggc,Uid:fd592319-f307-4a63-b9da-7593332cc589,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63\"" Dec 16 12:46:15.504691 kubelet[2663]: E1216 12:46:15.504668 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:15.509293 containerd[1496]: time="2025-12-16T12:46:15.509255004Z" level=info msg="CreateContainer within sandbox \"8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:15.517272 containerd[1496]: time="2025-12-16T12:46:15.516687627Z" level=info msg="Container daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:15.521780 containerd[1496]: time="2025-12-16T12:46:15.521734270Z" level=info msg="CreateContainer within sandbox \"8b65d2e0048afffbac46745c119b4474c84f2cf7c51f4dbe27f7c20f402d1d63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f\"" Dec 16 12:46:15.524135 containerd[1496]: time="2025-12-16T12:46:15.522534717Z" level=info msg="StartContainer for \"daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f\"" Dec 16 12:46:15.524135 containerd[1496]: time="2025-12-16T12:46:15.523356484Z" level=info msg="connecting to shim daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f" address="unix:///run/containerd/s/b106ab901f25c7815839d74cb420d1859ad00398aa5b3633d8ce819cd744a210" protocol=ttrpc version=3 Dec 16 12:46:15.547661 systemd[1]: Started cri-containerd-daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f.scope - libcontainer container daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f. 
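
The recurring dns.go:153 "Nameserver limits exceeded" error is kubelet warning that the pod's resolver configuration carries more nameservers than the classic glibc limit of three, so only the first three are applied: 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of that truncation, assuming a naive resolv.conf parser rather than kubelet's actual one:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; kubelet warns past this

// applyNameserverLimit keeps the first three nameserver entries,
// mirroring "some nameservers have been omitted" in the log above.
func applyNameserverLimit(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
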
Dec 16 12:46:15.574935 containerd[1496]: time="2025-12-16T12:46:15.574899001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:15.575926 containerd[1496]: time="2025-12-16T12:46:15.575871969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:15.576030 containerd[1496]: time="2025-12-16T12:46:15.576012810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:46:15.576315 kubelet[2663]: E1216 12:46:15.576281 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:15.576383 kubelet[2663]: E1216 12:46:15.576329 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:15.576724 containerd[1496]: time="2025-12-16T12:46:15.576700776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:15.576982 kubelet[2663]: E1216 12:46:15.576927 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgwdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddfc8d545-2gnqr_calico-apiserver(b7cd9aef-f007-4e21-92a2-b7da7e34e076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:15.578964 kubelet[2663]: E1216 12:46:15.578890 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" podUID="b7cd9aef-f007-4e21-92a2-b7da7e34e076" Dec 16 12:46:15.581785 containerd[1496]: time="2025-12-16T12:46:15.581701459Z" level=info msg="StartContainer for \"daac513dd94dd22496869b4a5537e6215a15a5945030bccf7aa5882e88043e6f\" returns successfully" Dec 16 12:46:15.789913 containerd[1496]: time="2025-12-16T12:46:15.789860023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:15.791145 containerd[1496]: time="2025-12-16T12:46:15.791086394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:15.791218 containerd[1496]: time="2025-12-16T12:46:15.791085354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:46:15.791490 kubelet[2663]: E1216 12:46:15.791450 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:15.791556 kubelet[2663]: E1216 12:46:15.791500 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:15.791709 kubelet[2663]: E1216 12:46:15.791669 2663 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x898d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-575rn_calico-system(bd9dfc7d-59c3-4082-b547-c4b54eeb1dee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:15.793768 containerd[1496]: time="2025-12-16T12:46:15.793737176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:16.017650 containerd[1496]: time="2025-12-16T12:46:16.017562551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:16.018684 containerd[1496]: time="2025-12-16T12:46:16.018614600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:16.018684 containerd[1496]: time="2025-12-16T12:46:16.018655480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:46:16.018993 kubelet[2663]: E1216 12:46:16.018952 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:16.019048 kubelet[2663]: E1216 12:46:16.019003 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:16.019162 kubelet[2663]: E1216 12:46:16.019125 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x898d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-575rn_calico-system(bd9dfc7d-59c3-4082-b547-c4b54eeb1dee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:16.020290 kubelet[2663]: E1216 12:46:16.020227 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:46:16.226608 kubelet[2663]: E1216 12:46:16.226568 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:16.240329 kubelet[2663]: E1216 12:46:16.240244 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" podUID="b7cd9aef-f007-4e21-92a2-b7da7e34e076" Dec 16 12:46:16.242304 kubelet[2663]: E1216 12:46:16.242155 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:46:16.249069 kubelet[2663]: I1216 12:46:16.249014 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n7ggc" podStartSLOduration=41.248998232 podStartE2EDuration="41.248998232s" podCreationTimestamp="2025-12-16 12:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:16.248815231 +0000 UTC m=+46.318006691" watchObservedRunningTime="2025-12-16 12:46:16.248998232 +0000 UTC m=+46.318189692" Dec 16 12:46:16.314659 systemd-networkd[1432]: calic392a8adfa2: Gained IPv6LL Dec 16 12:46:16.622973 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:51842.service - OpenSSH per-connection server daemon (10.0.0.1:51842). Dec 16 12:46:16.678828 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 51842 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:16.680411 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:16.684594 systemd-logind[1482]: New session 9 of user core. Dec 16 12:46:16.690640 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 12:46:16.762644 systemd-networkd[1432]: cali7a6790ed1fe: Gained IPv6LL Dec 16 12:46:16.910495 sshd[4837]: Connection closed by 10.0.0.1 port 51842 Dec 16 12:46:16.910799 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:16.915999 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:51842.service: Deactivated successfully. Dec 16 12:46:16.918003 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:46:16.918698 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:46:16.919622 systemd-logind[1482]: Removed session 9. Dec 16 12:46:17.019597 systemd-networkd[1432]: cali53ef37c9db5: Gained IPv6LL Dec 16 12:46:17.244435 kubelet[2663]: E1216 12:46:17.244322 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:17.246114 kubelet[2663]: E1216 12:46:17.244736 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" podUID="b7cd9aef-f007-4e21-92a2-b7da7e34e076" Dec 16 12:46:17.246797 kubelet[2663]: E1216 12:46:17.246735 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:46:18.052916 containerd[1496]: time="2025-12-16T12:46:18.052877825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:18.245961 kubelet[2663]: E1216 12:46:18.245923 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:18.265089 containerd[1496]: time="2025-12-16T12:46:18.264911237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:18.266105 containerd[1496]: time="2025-12-16T12:46:18.266021686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 
16 12:46:18.266312 containerd[1496]: time="2025-12-16T12:46:18.266088167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:46:18.267079 kubelet[2663]: E1216 12:46:18.267044 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:18.267151 kubelet[2663]: E1216 12:46:18.267092 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:18.267258 kubelet[2663]: E1216 12:46:18.267216 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e429b7d76e594d25b7b642aec08c2a17,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlzs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-955ff858c-k74cv_calico-system(d3573843-84a0-4e96-b493-87073cbb0cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:18.270524 containerd[1496]: time="2025-12-16T12:46:18.270495362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:18.485329 containerd[1496]: time="2025-12-16T12:46:18.485283956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:18.486322 containerd[1496]: time="2025-12-16T12:46:18.486286804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:18.486410 containerd[1496]: time="2025-12-16T12:46:18.486372525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:46:18.486549 kubelet[2663]: E1216 12:46:18.486514 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:18.486620 kubelet[2663]: E1216 12:46:18.486564 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:18.486718 kubelet[2663]: E1216 12:46:18.486681 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlzs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-955ff858c-k74cv_calico-system(d3573843-84a0-4e96-b493-87073cbb0cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:18.488242 kubelet[2663]: E1216 12:46:18.488180 2663 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-955ff858c-k74cv" podUID="d3573843-84a0-4e96-b493-87073cbb0cd2" Dec 16 12:46:21.925844 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Dec 16 12:46:21.998346 sshd[4855]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:22.000214 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:22.007356 systemd-logind[1482]: New session 10 of user core. Dec 16 12:46:22.017735 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:46:22.208597 sshd[4861]: Connection closed by 10.0.0.1 port 35098 Dec 16 12:46:22.209694 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:22.218974 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:35098.service: Deactivated successfully. Dec 16 12:46:22.221681 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:46:22.224026 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:46:22.228064 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:35114.service - OpenSSH per-connection server daemon (10.0.0.1:35114). Dec 16 12:46:22.228691 systemd-logind[1482]: Removed session 10. Dec 16 12:46:22.300353 sshd[4877]: Accepted publickey for core from 10.0.0.1 port 35114 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:22.301572 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:22.305720 systemd-logind[1482]: New session 11 of user core. Dec 16 12:46:22.314734 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:46:22.511473 sshd[4880]: Connection closed by 10.0.0.1 port 35114 Dec 16 12:46:22.511847 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:22.523572 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:35114.service: Deactivated successfully. Dec 16 12:46:22.527255 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:46:22.528392 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:46:22.532052 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:35122.service - OpenSSH per-connection server daemon (10.0.0.1:35122). Dec 16 12:46:22.533565 systemd-logind[1482]: Removed session 11. Dec 16 12:46:22.603149 sshd[4891]: Accepted publickey for core from 10.0.0.1 port 35122 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:22.605947 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:22.612226 systemd-logind[1482]: New session 12 of user core. 
Dec 16 12:46:22.622703 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:46:22.750928 sshd[4894]: Connection closed by 10.0.0.1 port 35122 Dec 16 12:46:22.751288 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:22.755993 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:35122.service: Deactivated successfully. Dec 16 12:46:22.758692 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:46:22.762770 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:46:22.765007 systemd-logind[1482]: Removed session 12. Dec 16 12:46:25.053466 containerd[1496]: time="2025-12-16T12:46:25.053139660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:25.274077 containerd[1496]: time="2025-12-16T12:46:25.273964236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:25.274921 containerd[1496]: time="2025-12-16T12:46:25.274839762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:25.274921 containerd[1496]: time="2025-12-16T12:46:25.274891763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:46:25.275063 kubelet[2663]: E1216 12:46:25.275020 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:25.275371 kubelet[2663]: E1216 12:46:25.275075 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:25.275371 kubelet[2663]: E1216 12:46:25.275291 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6hqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddfc8d545-95q7h_calico-apiserver(4b03df5e-c87f-4925-bce5-1bc694fc45a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:25.275518 containerd[1496]: time="2025-12-16T12:46:25.275485047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:25.276653 kubelet[2663]: E1216 12:46:25.276610 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" podUID="4b03df5e-c87f-4925-bce5-1bc694fc45a1" Dec 16 12:46:25.485004 containerd[1496]: time="2025-12-16T12:46:25.484954742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:25.486109 containerd[1496]: time="2025-12-16T12:46:25.486069790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:25.486194 
containerd[1496]: time="2025-12-16T12:46:25.486159470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:46:25.486319 kubelet[2663]: E1216 12:46:25.486274 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:25.486360 kubelet[2663]: E1216 12:46:25.486326 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:25.486899 kubelet[2663]: E1216 12:46:25.486498 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5djxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76c76958cc-4g9pn_calico-system(88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:25.488071 kubelet[2663]: E1216 12:46:25.488028 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" podUID="88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9" Dec 16 12:46:26.037642 containerd[1496]: time="2025-12-16T12:46:26.037599842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:26.309483 containerd[1496]: time="2025-12-16T12:46:26.309358996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:26.310536 containerd[1496]: time="2025-12-16T12:46:26.310498524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:26.310628 containerd[1496]: time="2025-12-16T12:46:26.310519284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:46:26.310732 kubelet[2663]: E1216 12:46:26.310695 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:26.310925 kubelet[2663]: E1216 12:46:26.310745 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:26.310925 kubelet[2663]: E1216 
12:46:26.310884 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6qtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-x98rg_calico-system(1f1fea87-d58f-4ba7-813c-87eb72bdb004): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:26.312128 kubelet[2663]: E1216 12:46:26.312081 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x98rg" 
podUID="1f1fea87-d58f-4ba7-813c-87eb72bdb004" Dec 16 12:46:27.764720 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:35130.service - OpenSSH per-connection server daemon (10.0.0.1:35130). Dec 16 12:46:27.816605 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 35130 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:27.818476 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:27.822512 systemd-logind[1482]: New session 13 of user core. Dec 16 12:46:27.836653 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:46:27.967417 sshd[4918]: Connection closed by 10.0.0.1 port 35130 Dec 16 12:46:27.968221 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:27.979696 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:35130.service: Deactivated successfully. Dec 16 12:46:27.981680 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:46:27.983381 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:46:27.985862 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:35142.service - OpenSSH per-connection server daemon (10.0.0.1:35142). Dec 16 12:46:27.987131 systemd-logind[1482]: Removed session 13. Dec 16 12:46:28.039797 containerd[1496]: time="2025-12-16T12:46:28.039689811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:28.060688 sshd[4932]: Accepted publickey for core from 10.0.0.1 port 35142 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:28.062049 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:28.066289 systemd-logind[1482]: New session 14 of user core. Dec 16 12:46:28.070618 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:46:28.260515 containerd[1496]: time="2025-12-16T12:46:28.260431769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.265208 containerd[1496]: time="2025-12-16T12:46:28.265161722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:28.265290 containerd[1496]: time="2025-12-16T12:46:28.265164642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:46:28.265502 kubelet[2663]: E1216 12:46:28.265442 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:28.266024 kubelet[2663]: E1216 12:46:28.265822 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:28.266024 kubelet[2663]: E1216 12:46:28.265965 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x898d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-575rn_calico-system(bd9dfc7d-59c3-4082-b547-c4b54eeb1dee): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.268182 containerd[1496]: time="2025-12-16T12:46:28.268151542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:28.278929 sshd[4935]: Connection closed by 10.0.0.1 port 35142 Dec 16 12:46:28.280388 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:28.288582 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:35142.service: Deactivated successfully. Dec 16 12:46:28.290426 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:46:28.291286 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:46:28.292829 systemd-logind[1482]: Removed session 14. Dec 16 12:46:28.294169 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:35154.service - OpenSSH per-connection server daemon (10.0.0.1:35154). Dec 16 12:46:28.360314 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 35154 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:28.361685 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:28.366438 systemd-logind[1482]: New session 15 of user core. Dec 16 12:46:28.373608 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:46:28.492205 containerd[1496]: time="2025-12-16T12:46:28.492138562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:28.495084 containerd[1496]: time="2025-12-16T12:46:28.495046702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:28.495216 containerd[1496]: time="2025-12-16T12:46:28.495117983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:46:28.495303 kubelet[2663]: E1216 12:46:28.495249 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:28.495402 kubelet[2663]: E1216 12:46:28.495309 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:28.495823 kubelet[2663]: E1216 12:46:28.495436 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x898d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-575rn_calico-system(bd9dfc7d-59c3-4082-b547-c4b54eeb1dee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:28.496651 kubelet[2663]: E1216 12:46:28.496602 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:46:29.028854 sshd[4950]: Connection closed by 10.0.0.1 port 35154 Dec 16 12:46:29.029270 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:29.044380 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:35154.service: Deactivated successfully. Dec 16 12:46:29.047380 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:46:29.049926 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 12:46:29.055724 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:35158.service - OpenSSH per-connection server daemon (10.0.0.1:35158). Dec 16 12:46:29.058159 systemd-logind[1482]: Removed session 15. Dec 16 12:46:29.117198 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 35158 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:29.118677 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:29.123636 systemd-logind[1482]: New session 16 of user core. Dec 16 12:46:29.137673 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:46:29.463641 sshd[4974]: Connection closed by 10.0.0.1 port 35158 Dec 16 12:46:29.464221 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:29.476057 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:35158.service: Deactivated successfully. Dec 16 12:46:29.478022 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:46:29.479842 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:46:29.485143 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:35172.service - OpenSSH per-connection server daemon (10.0.0.1:35172). Dec 16 12:46:29.488865 systemd-logind[1482]: Removed session 16. Dec 16 12:46:29.542581 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 35172 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:29.544008 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:29.548533 systemd-logind[1482]: New session 17 of user core. Dec 16 12:46:29.558673 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:46:29.716325 sshd[4989]: Connection closed by 10.0.0.1 port 35172 Dec 16 12:46:29.716410 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:29.723038 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:35172.service: Deactivated successfully. Dec 16 12:46:29.725493 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:46:29.726384 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:46:29.728800 systemd-logind[1482]: Removed session 17. 
Dec 16 12:46:30.042474 kubelet[2663]: E1216 12:46:30.041356 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-955ff858c-k74cv" podUID="d3573843-84a0-4e96-b493-87073cbb0cd2" Dec 16 12:46:31.038137 containerd[1496]: time="2025-12-16T12:46:31.037855795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:31.263369 containerd[1496]: time="2025-12-16T12:46:31.263322817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:31.264543 containerd[1496]: time="2025-12-16T12:46:31.264420624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:31.264543 containerd[1496]: time="2025-12-16T12:46:31.264531345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:46:31.265084 kubelet[2663]: E1216 12:46:31.265040 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:31.265389 kubelet[2663]: E1216 12:46:31.265095 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:31.265389 kubelet[2663]: E1216 12:46:31.265241 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgwdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ddfc8d545-2gnqr_calico-apiserver(b7cd9aef-f007-4e21-92a2-b7da7e34e076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:31.266597 kubelet[2663]: E1216 12:46:31.266540 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" podUID="b7cd9aef-f007-4e21-92a2-b7da7e34e076" Dec 16 12:46:34.730091 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:43746.service - OpenSSH per-connection server daemon (10.0.0.1:43746). Dec 16 12:46:34.793360 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 43746 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:34.795031 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:34.803483 systemd-logind[1482]: New session 18 of user core. Dec 16 12:46:34.809761 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:46:34.923906 sshd[5009]: Connection closed by 10.0.0.1 port 43746 Dec 16 12:46:34.924264 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:34.927811 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:43746.service: Deactivated successfully. Dec 16 12:46:34.931224 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:46:34.932415 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:46:34.933962 systemd-logind[1482]: Removed session 18. Dec 16 12:46:36.278646 kubelet[2663]: E1216 12:46:36.278610 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:37.049904 kubelet[2663]: E1216 12:46:37.049852 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76c76958cc-4g9pn" podUID="88af0c32-5f2c-41d6-ac54-1c0ec2b9ceb9" Dec 16 12:46:37.050458 kubelet[2663]: E1216 12:46:37.050333 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-x98rg" podUID="1f1fea87-d58f-4ba7-813c-87eb72bdb004" Dec 16 12:46:39.937337 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:43750.service - OpenSSH per-connection server daemon (10.0.0.1:43750). Dec 16 12:46:39.995294 sshd[5052]: Accepted publickey for core from 10.0.0.1 port 43750 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:39.996719 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:40.001299 systemd-logind[1482]: New session 19 of user core. Dec 16 12:46:40.007659 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:46:40.129964 sshd[5055]: Connection closed by 10.0.0.1 port 43750 Dec 16 12:46:40.130521 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:40.134095 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:43750.service: Deactivated successfully. Dec 16 12:46:40.135962 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:46:40.136814 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:46:40.138129 systemd-logind[1482]: Removed session 19. 
Dec 16 12:46:41.037302 kubelet[2663]: E1216 12:46:41.037247 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-95q7h" podUID="4b03df5e-c87f-4925-bce5-1bc694fc45a1" Dec 16 12:46:42.039139 kubelet[2663]: E1216 12:46:42.038669 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-575rn" podUID="bd9dfc7d-59c3-4082-b547-c4b54eeb1dee" Dec 16 12:46:45.038948 kubelet[2663]: E1216 12:46:45.038839 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:46:45.040004 containerd[1496]: time="2025-12-16T12:46:45.039962863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:45.147344 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:54784.service - OpenSSH per-connection server daemon (10.0.0.1:54784). Dec 16 12:46:45.215290 sshd[5071]: Accepted publickey for core from 10.0.0.1 port 54784 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:46:45.216860 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:46:45.222292 systemd-logind[1482]: New session 20 of user core. Dec 16 12:46:45.231649 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:46:45.281857 containerd[1496]: time="2025-12-16T12:46:45.281797957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:46:45.283326 containerd[1496]: time="2025-12-16T12:46:45.283280732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:46:45.283421 containerd[1496]: time="2025-12-16T12:46:45.283364733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:46:45.283707 kubelet[2663]: E1216 12:46:45.283658 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:46:45.283772 kubelet[2663]: E1216 12:46:45.283720 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:46:45.283891 kubelet[2663]: E1216 12:46:45.283846 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e429b7d76e594d25b7b642aec08c2a17,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlzs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-955ff858c-k74cv_calico-system(d3573843-84a0-4e96-b493-87073cbb0cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:46:45.286985 containerd[1496]: time="2025-12-16T12:46:45.286944290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:46:45.374891 sshd[5074]: Connection closed by 10.0.0.1 port 54784
Dec 16 12:46:45.375817 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Dec 16 12:46:45.379901 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit.
Dec 16 12:46:45.380226 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:54784.service: Deactivated successfully.
Dec 16 12:46:45.383249 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 12:46:45.385813 systemd-logind[1482]: Removed session 20.
Dec 16 12:46:45.502922 containerd[1496]: time="2025-12-16T12:46:45.502864997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:46:45.503913 containerd[1496]: time="2025-12-16T12:46:45.503869127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:46:45.504081 containerd[1496]: time="2025-12-16T12:46:45.503900207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:46:45.504168 kubelet[2663]: E1216 12:46:45.504113 2663 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:46:45.504221 kubelet[2663]: E1216 12:46:45.504196 2663 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:46:45.504365 kubelet[2663]: E1216 12:46:45.504327 2663 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlzs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-955ff858c-k74cv_calico-system(d3573843-84a0-4e96-b493-87073cbb0cd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:46:45.505879 kubelet[2663]: E1216 12:46:45.505794 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-955ff858c-k74cv" podUID="d3573843-84a0-4e96-b493-87073cbb0cd2"
Dec 16 12:46:46.037026 kubelet[2663]: E1216 12:46:46.036946 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:46:46.037903 kubelet[2663]: E1216 12:46:46.037863 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ddfc8d545-2gnqr" podUID="b7cd9aef-f007-4e21-92a2-b7da7e34e076"