Dec 12 17:34:47.781594 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:34:47.781615 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:34:47.781624 kernel: KASLR enabled
Dec 12 17:34:47.781630 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:34:47.781635 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 12 17:34:47.781641 kernel: random: crng init done
Dec 12 17:34:47.781648 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 17:34:47.781654 kernel: secureboot: Secure boot enabled
Dec 12 17:34:47.781660 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:34:47.781667 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 12 17:34:47.781673 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:34:47.781679 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781685 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781691 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781698 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781705 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781711 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781717 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781723 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781729 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:34:47.781735 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:34:47.781741 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:34:47.781747 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:34:47.781753 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 12 17:34:47.781759 kernel: Zone ranges:
Dec 12 17:34:47.781766 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:34:47.781772 kernel: DMA32 empty
Dec 12 17:34:47.781778 kernel: Normal empty
Dec 12 17:34:47.781784 kernel: Device empty
Dec 12 17:34:47.781790 kernel: Movable zone start for each node
Dec 12 17:34:47.781796 kernel: Early memory node ranges
Dec 12 17:34:47.781802 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 12 17:34:47.781808 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 12 17:34:47.781814 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 12 17:34:47.781820 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 12 17:34:47.781826 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 12 17:34:47.781832 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 12 17:34:47.781845 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 12 17:34:47.781852 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 12 17:34:47.781858 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:34:47.781867 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:34:47.781873 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:34:47.781880 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 12 17:34:47.781889 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:34:47.781898 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:34:47.781905 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:34:47.781911 kernel: psci: Trusted OS migration not required
Dec 12 17:34:47.781917 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:34:47.781924 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:34:47.781930 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:34:47.781937 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:34:47.781943 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:34:47.781950 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:34:47.781958 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:34:47.781964 kernel: CPU features: detected: Spectre-v4
Dec 12 17:34:47.781970 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:34:47.781977 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:34:47.781984 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:34:47.781990 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:34:47.781996 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:34:47.782003 kernel: alternatives: applying boot alternatives
Dec 12 17:34:47.782010 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:34:47.782017 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:34:47.782023 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:34:47.782031 kernel: Fallback order for Node 0: 0
Dec 12 17:34:47.782038 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:34:47.782044 kernel: Policy zone: DMA
Dec 12 17:34:47.782050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:34:47.782057 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:34:47.782063 kernel: software IO TLB: area num 4.
Dec 12 17:34:47.782069 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:34:47.782076 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 12 17:34:47.782082 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:34:47.782088 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:34:47.782095 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:34:47.782102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:34:47.782110 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:34:47.782116 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:34:47.782123 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
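The kernel command line above is what the bootloader handed over; the verity.usrhash value is the dm-verity root hash that the initrd later uses to authenticate the /usr partition. A minimal cross-check on the booted node, assuming nothing beyond standard procfs and coreutils:

    # Show the command line the kernel actually booted with
    cat /proc/cmdline
    # Isolate the dm-verity root hash for comparison against the image manifest
    tr ' ' '\n' < /proc/cmdline | grep '^verity.usrhash='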
Dec 12 17:34:47.782129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:34:47.782136 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:34:47.782143 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:34:47.782149 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:34:47.782155 kernel: GICv3: 256 SPIs implemented
Dec 12 17:34:47.782161 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:34:47.782168 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:34:47.782174 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:34:47.782180 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:34:47.782188 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:34:47.782195 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:34:47.782202 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:34:47.782208 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:34:47.782215 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:34:47.782222 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:34:47.782229 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:34:47.782235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:47.782242 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:34:47.782248 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:34:47.782255 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:34:47.782263 kernel: arm-pv: using stolen time PV
Dec 12 17:34:47.782269 kernel: Console: colour dummy device 80x25
Dec 12 17:34:47.782276 kernel: ACPI: Core revision 20240827
Dec 12 17:34:47.782283 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:34:47.782289 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:34:47.782296 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:34:47.782302 kernel: landlock: Up and running.
Dec 12 17:34:47.782315 kernel: SELinux: Initializing.
Dec 12 17:34:47.782322 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:34:47.782330 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:34:47.782337 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:34:47.782344 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:34:47.782350 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:34:47.782357 kernel: Remapping and enabling EFI services.
Dec 12 17:34:47.782364 kernel: smp: Bringing up secondary CPUs ...
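The 25.00 MHz arch_sys_counter registered here is selected as the system clocksource a little further down the log; which source is active can be confirmed at runtime through sysfs. A minimal sketch:

    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource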
Dec 12 17:34:47.782370 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:34:47.782376 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:34:47.782383 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:34:47.782391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:47.782403 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:34:47.782410 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:34:47.782418 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:34:47.782425 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:34:47.782432 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:47.782439 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:34:47.782446 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:34:47.782454 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:34:47.782461 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:34:47.782503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:34:47.782511 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:34:47.782518 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:34:47.782525 kernel: SMP: Total of 4 processors activated.
Dec 12 17:34:47.782533 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:34:47.782540 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:34:47.782546 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:34:47.782554 kernel: CPU features: detected: Common not Private translations
Dec 12 17:34:47.782563 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:34:47.782570 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:34:47.782576 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:34:47.782583 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:34:47.782590 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:34:47.782597 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:34:47.782605 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:34:47.782612 kernel: alternatives: applying system-wide alternatives
Dec 12 17:34:47.782618 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:34:47.782627 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Dec 12 17:34:47.782634 kernel: devtmpfs: initialized
Dec 12 17:34:47.782641 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:34:47.782648 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:34:47.782655 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:34:47.782662 kernel: 0 pages in range for non-PLT usage
Dec 12 17:34:47.782669 kernel: 508400 pages in range for PLT usage
Dec 12 17:34:47.782675 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:34:47.782682 kernel: SMBIOS 3.0.0 present.
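The CPU features detected here (CRC32, LSE atomics, SSBS, and so on) surface as lowercase flags in /proc/cpuinfo, which is a quick way to cross-check this section on the running guest. A sketch, assuming standard coreutils in the image:

    nproc                            # expect 4, matching "Brought up 1 node, 4 CPUs"
    grep -m1 Features /proc/cpuinfo  # e.g. ... crc32 atomics ssbs ...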
Dec 12 17:34:47.782691 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:34:47.782698 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:34:47.782704 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:34:47.782711 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:34:47.782718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:34:47.782725 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:34:47.782732 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:34:47.782739 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Dec 12 17:34:47.782746 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:34:47.782754 kernel: cpuidle: using governor menu
Dec 12 17:34:47.782761 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:34:47.782768 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:34:47.782775 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:34:47.782782 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:34:47.782788 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:34:47.782795 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:34:47.782802 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:34:47.782809 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:34:47.782817 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:34:47.782824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:34:47.782831 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:34:47.782838 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:34:47.782845 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:34:47.782851 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:34:47.782858 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:34:47.782865 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:34:47.782872 kernel: ACPI: Interpreter enabled
Dec 12 17:34:47.782880 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:34:47.782887 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:34:47.782894 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:34:47.782900 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:34:47.782907 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:34:47.782914 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:34:47.782921 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:34:47.782928 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:34:47.782935 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:34:47.783070 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:34:47.783136 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:34:47.783195 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:34:47.783254 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:34:47.783320 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:34:47.783330 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:34:47.783337 kernel: PCI host bridge to bus 0000:00
Dec 12 17:34:47.783406 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:34:47.783460 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:34:47.783541 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:34:47.783826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:34:47.783924 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:34:47.783997 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:34:47.784062 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:34:47.784122 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:34:47.784183 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:34:47.784243 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:34:47.784304 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:34:47.784381 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:34:47.784451 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:34:47.784529 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:34:47.784589 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:34:47.784599 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:34:47.784606 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:34:47.784613 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:34:47.784620 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:34:47.784628 kernel: iommu: Default domain type: Translated
Dec 12 17:34:47.784635 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:34:47.784642 kernel: efivars: Registered efivars operations
Dec 12 17:34:47.784651 kernel: vgaarb: loaded
Dec 12 17:34:47.784658 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:34:47.784665 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:34:47.784672 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:34:47.784679 kernel: pnp: PnP ACPI init
Dec 12 17:34:47.784751 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:34:47.784761 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:34:47.784768 kernel: NET: Registered PF_INET protocol family
Dec 12 17:34:47.784777 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:34:47.784784 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:34:47.784791 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:34:47.784799 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:34:47.784805 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:34:47.784812 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:34:47.784820 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:34:47.784827 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:34:47.784834 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:34:47.784842 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:34:47.784849 kernel: kvm [1]: HYP mode not available
Dec 12 17:34:47.784855 kernel: Initialise system trusted keyrings
Dec 12 17:34:47.784862 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:34:47.784869 kernel: Key type asymmetric registered
Dec 12 17:34:47.784876 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:34:47.784883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:34:47.784890 kernel: io scheduler mq-deadline registered
Dec 12 17:34:47.784897 kernel: io scheduler kyber registered
Dec 12 17:34:47.784905 kernel: io scheduler bfq registered
Dec 12 17:34:47.784912 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:34:47.784919 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:34:47.784926 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:34:47.784986 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:34:47.784995 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:34:47.785002 kernel: thunder_xcv, ver 1.0
Dec 12 17:34:47.785009 kernel: thunder_bgx, ver 1.0
Dec 12 17:34:47.785016 kernel: nicpf, ver 1.0
Dec 12 17:34:47.785024 kernel: nicvf, ver 1.0
Dec 12 17:34:47.785091 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:34:47.785147 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:34:47 UTC (1765560887)
Dec 12 17:34:47.785156 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:34:47.785163 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:34:47.785171 kernel: watchdog: NMI not fully supported
Dec 12 17:34:47.785178 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:34:47.785184 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:34:47.785193 kernel: Segment Routing with IPv6
Dec 12 17:34:47.785200 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:34:47.785207 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:34:47.785214 kernel: Key type dns_resolver registered
Dec 12 17:34:47.785220 kernel: registered taskstats version 1
Dec 12 17:34:47.785227 kernel: Loading compiled-in X.509 certificates
Dec 12 17:34:47.785234 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:34:47.785241 kernel: Demotion targets for Node 0: null
Dec 12 17:34:47.785248 kernel: Key type .fscrypt registered
Dec 12 17:34:47.785256 kernel: Key type fscrypt-provisioning registered
Dec 12 17:34:47.785263 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:34:47.785270 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:34:47.785277 kernel: ima: No architecture policies found
Dec 12 17:34:47.785283 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:34:47.785290 kernel: clk: Disabling unused clocks
Dec 12 17:34:47.785297 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:34:47.785304 kernel: Warning: unable to open an initial console.
Dec 12 17:34:47.785320 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:34:47.785329 kernel: Run /init as init process
Dec 12 17:34:47.785336 kernel: with arguments:
Dec 12 17:34:47.785343 kernel: /init
Dec 12 17:34:47.785349 kernel: with environment:
Dec 12 17:34:47.785356 kernel: HOME=/
Dec 12 17:34:47.785363 kernel: TERM=linux
Dec 12 17:34:47.785371 systemd[1]: Successfully made /usr/ read-only.
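Three block I/O schedulers are registered here (mq-deadline, kyber, bfq); the one actually attached to the virtio disk can be read, and switched, through sysfs. A sketch, assuming the disk enumerates as vda as it does later in this log:

    cat /sys/block/vda/queue/scheduler        # active scheduler in brackets, e.g. [mq-deadline] kyber bfq none
    echo bfq > /sys/block/vda/queue/scheduler # switch at runtime (needs root)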
Dec 12 17:34:47.785381 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:34:47.785390 systemd[1]: Detected virtualization kvm.
Dec 12 17:34:47.785397 systemd[1]: Detected architecture arm64.
Dec 12 17:34:47.785404 systemd[1]: Running in initrd.
Dec 12 17:34:47.785412 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:34:47.785419 systemd[1]: Hostname set to .
Dec 12 17:34:47.785426 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:34:47.785434 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:34:47.785441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:34:47.785450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:34:47.785458 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:34:47.785474 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:34:47.785482 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:34:47.785494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:34:47.785504 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:34:47.785515 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:34:47.785522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:34:47.785534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:34:47.785542 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:34:47.785549 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:34:47.785556 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:34:47.785564 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:34:47.785578 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:34:47.785586 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:34:47.785595 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:34:47.785603 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:34:47.785611 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:34:47.785618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:34:47.785626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:34:47.785634 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:34:47.785641 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:34:47.785649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:34:47.785657 systemd[1]: Finished network-cleanup.service - Network Cleanup.
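The +PAM +AUDIT ... string is systemd's compile-time feature list; the same list is printed by systemctl, which is a quick way to check whether, say, TPM2 or FIDO2 support was built into the installed systemd:

    systemctl --version   # prints version and the same +/- feature flags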
Dec 12 17:34:47.785666 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:34:47.785674 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:34:47.785682 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:34:47.785691 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:34:47.785699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:47.785706 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:34:47.785716 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:34:47.785724 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:34:47.785732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:34:47.785763 systemd-journald[246]: Collecting audit messages is disabled.
Dec 12 17:34:47.785788 systemd-journald[246]: Journal started
Dec 12 17:34:47.785806 systemd-journald[246]: Runtime Journal (/run/log/journal/4f2a12b76c10475391ef9fa463eaa5d5) is 6M, max 48.5M, 42.4M free.
Dec 12 17:34:47.789595 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:34:47.774754 systemd-modules-load[249]: Inserted module 'overlay'
Dec 12 17:34:47.791409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:47.794481 kernel: Bridge firewalling registered
Dec 12 17:34:47.794518 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:34:47.794454 systemd-modules-load[249]: Inserted module 'br_netfilter'
Dec 12 17:34:47.799511 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:34:47.800727 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:34:47.805215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:34:47.807197 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:34:47.810144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:34:47.822126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:34:47.830547 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:34:47.832602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:34:47.834465 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:34:47.836325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:34:47.840118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:34:47.842234 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:34:47.844378 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
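The bridge message above is actionable: iptables filtering of bridged traffic now requires the br_netfilter module to be loaded explicitly, as the kernel suggests. A minimal, persistent fix on a system with a writable /etc:

    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load on every boot
    sysctl net.bridge.bridge-nf-call-iptables                  # key only exists once the module is loaded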
Dec 12 17:34:47.873597 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:34:47.888490 systemd-resolved[290]: Positive Trust Anchors:
Dec 12 17:34:47.888508 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:34:47.888540 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:34:47.894231 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 12 17:34:47.895286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:34:47.899192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:34:47.961505 kernel: SCSI subsystem initialized
Dec 12 17:34:47.966577 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:34:47.974519 kernel: iscsi: registered transport (tcp)
Dec 12 17:34:47.987818 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:34:47.987892 kernel: QLogic iSCSI HBA Driver
Dec 12 17:34:48.007823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:34:48.035574 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:34:48.038132 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:34:48.094255 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:34:48.096628 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:34:48.170504 kernel: raid6: neonx8 gen() 15796 MB/s
Dec 12 17:34:48.187512 kernel: raid6: neonx4 gen() 15499 MB/s
Dec 12 17:34:48.204499 kernel: raid6: neonx2 gen() 12979 MB/s
Dec 12 17:34:48.221510 kernel: raid6: neonx1 gen() 10445 MB/s
Dec 12 17:34:48.238501 kernel: raid6: int64x8 gen() 6871 MB/s
Dec 12 17:34:48.255517 kernel: raid6: int64x4 gen() 7190 MB/s
Dec 12 17:34:48.272531 kernel: raid6: int64x2 gen() 6073 MB/s
Dec 12 17:34:48.289704 kernel: raid6: int64x1 gen() 5037 MB/s
Dec 12 17:34:48.289775 kernel: raid6: using algorithm neonx8 gen() 15796 MB/s
Dec 12 17:34:48.307589 kernel: raid6: .... xor() 11726 MB/s, rmw enabled
Dec 12 17:34:48.307639 kernel: raid6: using neon recovery algorithm
Dec 12 17:34:48.314830 kernel: xor: measuring software checksum speed
Dec 12 17:34:48.314894 kernel: 8regs : 21590 MB/sec
Dec 12 17:34:48.315490 kernel: 32regs : 21664 MB/sec
Dec 12 17:34:48.316595 kernel: arm64_neon : 25114 MB/sec
Dec 12 17:34:48.316610 kernel: xor: using function: arm64_neon (25114 MB/sec)
Dec 12 17:34:48.375522 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:34:48.388918 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
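systemd-resolved starts with the built-in DNSSEC positive trust anchor (the root DS record above) and the usual negative anchors for private zones. Once the system is up, the effective resolver state can be inspected with resolvectl:

    resolvectl status             # global and per-link DNS servers, DNSSEC setting
    resolvectl query example.com  # resolve through resolved, showing which link answered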
Dec 12 17:34:48.393509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:34:48.426788 systemd-udevd[504]: Using default interface naming scheme 'v255'.
Dec 12 17:34:48.432484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:34:48.435223 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:34:48.461164 dracut-pre-trigger[516]: rd.md=0: removing MD RAID activation
Dec 12 17:34:48.488971 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:34:48.491408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:34:48.552539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:34:48.554724 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:34:48.621980 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 17:34:48.622184 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 17:34:48.627169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:34:48.627292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:48.631522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:48.634366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:48.644783 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:34:48.644806 kernel: GPT:9289727 != 19775487
Dec 12 17:34:48.644816 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:34:48.644825 kernel: GPT:9289727 != 19775487
Dec 12 17:34:48.644833 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:34:48.644841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:34:48.672841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:48.681172 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 17:34:48.682614 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:34:48.691433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 17:34:48.697611 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 17:34:48.698722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 17:34:48.708160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:34:48.709424 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:34:48.711552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:34:48.713496 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:34:48.716231 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:34:48.718092 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:34:48.743581 disk-uuid[597]: Primary Header is updated.
Dec 12 17:34:48.743581 disk-uuid[597]: Secondary Entries is updated.
Dec 12 17:34:48.743581 disk-uuid[597]: Secondary Header is updated.
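The GPT complaints ("Alternate GPT header not at the end of the disk") are typical of a small disk image that was grown to a larger virtual disk: the backup header still sits at the image's original end. The boot proceeds regardless, but the equivalent manual repair, if one were doing it by hand, is a one-liner with sgdisk (or parted, as the kernel suggests). A sketch assuming /dev/vda as in this log:

    sgdisk -e /dev/vda   # move the backup GPT header/entries to the true end of the disk
    partprobe /dev/vda   # ask the kernel to re-read the partition table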
Dec 12 17:34:48.750491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:34:48.749443 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:34:49.757535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:34:49.760579 disk-uuid[603]: The operation has completed successfully.
Dec 12 17:34:49.783802 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:34:49.783899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:34:49.807925 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:34:49.831484 sh[616]: Success
Dec 12 17:34:49.843833 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:34:49.843882 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:34:49.844865 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:34:49.853486 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:34:49.881980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:34:49.887538 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:34:49.905657 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:34:49.912606 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (628)
Dec 12 17:34:49.915076 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:34:49.915119 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:49.919501 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:34:49.919554 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:34:49.920441 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:34:49.921709 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:34:49.923077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:34:49.923836 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:34:49.925389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:34:49.948499 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659)
Dec 12 17:34:49.950840 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:49.950872 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:49.953504 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:34:49.953535 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:34:49.957503 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:49.960561 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:34:49.962433 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:34:50.035116 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:34:50.038108 systemd[1]: Starting systemd-networkd.service - Network Configuration...
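verity-setup activates /dev/mapper/usr as a dm-verity target whose root hash is the verity.usrhash from the kernel command line; every read from /usr is then checked against the hash tree. Once the device exists its parameters can be inspected. A sketch:

    veritysetup status usr  # type VERITY, data/hash device, root hash, status: verified
    dmsetup table usr       # raw device-mapper table for the same target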
Dec 12 17:34:50.065328 ignition[706]: Ignition 2.22.0
Dec 12 17:34:50.065341 ignition[706]: Stage: fetch-offline
Dec 12 17:34:50.065375 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:50.065383 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:50.065461 ignition[706]: parsed url from cmdline: ""
Dec 12 17:34:50.065464 ignition[706]: no config URL provided
Dec 12 17:34:50.065481 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:34:50.065488 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:34:50.065508 ignition[706]: op(1): [started] loading QEMU firmware config module
Dec 12 17:34:50.065512 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 17:34:50.074750 systemd-networkd[808]: lo: Link UP
Dec 12 17:34:50.074763 systemd-networkd[808]: lo: Gained carrier
Dec 12 17:34:50.075660 systemd-networkd[808]: Enumeration completed
Dec 12 17:34:50.076248 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:34:50.077380 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:34:50.077384 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:34:50.078543 systemd[1]: Reached target network.target - Network.
Dec 12 17:34:50.080006 systemd-networkd[808]: eth0: Link UP
Dec 12 17:34:50.080118 systemd-networkd[808]: eth0: Gained carrier
Dec 12 17:34:50.080128 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:34:50.088760 ignition[706]: op(1): [finished] loading QEMU firmware config module
Dec 12 17:34:50.088782 ignition[706]: QEMU firmware config was not found. Ignoring...
Dec 12 17:34:50.100525 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:34:50.135483 ignition[706]: parsing config with SHA512: fb832e430d7165d595f5596d63c663507a582e52f9cbcc964706f005d142524b971a21988d0f652a5884f973bb96971a143877bc2152bffe047d4295134a1572
Dec 12 17:34:50.141484 unknown[706]: fetched base config from "system"
Dec 12 17:34:50.141499 unknown[706]: fetched user config from "qemu"
Dec 12 17:34:50.141973 ignition[706]: fetch-offline: fetch-offline passed
Dec 12 17:34:50.144290 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:34:50.142032 ignition[706]: Ignition finished successfully
Dec 12 17:34:50.145640 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 17:34:50.146392 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:34:50.183652 ignition[817]: Ignition 2.22.0
Dec 12 17:34:50.183669 ignition[817]: Stage: kargs
Dec 12 17:34:50.183811 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:50.183821 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:50.187057 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:34:50.184587 ignition[817]: kargs: kargs passed
Dec 12 17:34:50.184632 ignition[817]: Ignition finished successfully
Dec 12 17:34:50.189313 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
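On the qemu platform Ignition has no cloud metadata service to query, so it probes the QEMU firmware config device for a user config (hence the "modprobe qemu_fw_cfg" above); here one was found and its SHA512 logged. The config is injected when the VM is launched, using the documented fw_cfg key. A sketch of the relevant QEMU fragment (other options elided, config.ign is a placeholder path):

    qemu-system-aarch64 ... \
      -fw_cfg name=opt/com.coreos/config,file=./config.ign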
Dec 12 17:34:50.220395 ignition[825]: Ignition 2.22.0
Dec 12 17:34:50.220411 ignition[825]: Stage: disks
Dec 12 17:34:50.220558 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:50.223729 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:34:50.220567 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:50.224944 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:34:50.221274 ignition[825]: disks: disks passed
Dec 12 17:34:50.226553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:34:50.221329 ignition[825]: Ignition finished successfully
Dec 12 17:34:50.228410 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:34:50.230221 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:34:50.231630 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:34:50.234166 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:34:50.262173 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 17:34:50.267342 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:34:50.269988 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:34:50.331516 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:34:50.332266 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:34:50.333547 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:34:50.335762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:34:50.337405 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:34:50.338366 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:34:50.338405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:34:50.338428 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:34:50.350975 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:34:50.353406 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:34:50.356516 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (843)
Dec 12 17:34:50.358484 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:50.358509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:50.361114 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:34:50.361130 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:34:50.362836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:34:50.388944 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:34:50.392228 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:34:50.396361 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:34:50.400363 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:34:50.472001 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
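systemd-fsck found the ext4 ROOT filesystem clean before it was mounted writable at /sysroot. When debugging from an initramfs emergency shell, the same check can be reproduced read-only. A sketch assuming vda9 as in this log:

    blkid /dev/vda9      # expect LABEL="ROOT" TYPE="ext4"
    e2fsck -n /dev/vda9  # -n: report problems without modifying (only while unmounted)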
Dec 12 17:34:50.474041 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:34:50.475809 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:34:50.494496 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:50.508539 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:34:50.521231 ignition[957]: INFO : Ignition 2.22.0
Dec 12 17:34:50.521231 ignition[957]: INFO : Stage: mount
Dec 12 17:34:50.522835 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:50.522835 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:50.522835 ignition[957]: INFO : mount: mount passed
Dec 12 17:34:50.522835 ignition[957]: INFO : Ignition finished successfully
Dec 12 17:34:50.523881 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:34:50.526662 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:34:50.911984 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:34:50.913414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:34:50.947538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969)
Dec 12 17:34:50.950734 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:34:50.950767 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:34:50.953945 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:34:50.953978 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:34:50.954752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:34:50.985908 ignition[986]: INFO : Ignition 2.22.0
Dec 12 17:34:50.985908 ignition[986]: INFO : Stage: files
Dec 12 17:34:50.985908 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:50.985908 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:50.991679 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:34:50.991679 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:34:50.991679 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:34:50.991679 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:34:50.991679 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:34:50.999059 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:34:50.991720 unknown[986]: wrote ssh authorized keys file for user: core
Dec 12 17:34:51.001770 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:34:51.001770 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 12 17:34:51.058150 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:34:51.163373 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:34:51.163373 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:34:51.167622 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:34:51.180328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:34:51.180328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:34:51.180328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:34:51.180328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:34:51.180328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:34:51.180328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 12 17:34:51.277680 systemd-networkd[808]: eth0: Gained IPv6LL
Dec 12 17:34:51.560458 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 17:34:51.856588 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:34:51.856588 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 17:34:51.860681 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 17:34:51.862549 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:34:51.875612 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:34:51.878893 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:34:51.881411 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:34:51.881411 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:34:51.881411 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:34:51.881411 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:34:51.881411 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:34:51.881411 ignition[986]: INFO : files: files passed
Dec 12 17:34:51.881411 ignition[986]: INFO : Ignition finished successfully
Dec 12 17:34:51.881951 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:34:51.886568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:34:51.890611 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:34:51.904787 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:34:51.905066 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:34:51.908644 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 12 17:34:51.909876 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:34:51.909876 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:34:51.912968 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:34:51.914547 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:34:51.915823 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:34:51.918341 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:34:51.954538 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 17:34:51.955544 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 17:34:51.957625 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 17:34:51.961454 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 17:34:51.963614 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 17:34:51.964548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 17:34:51.988011 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:34:51.991330 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 17:34:52.027881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:34:52.030181 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:34:52.031489 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 17:34:52.033250 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 17:34:52.033385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:34:52.035673 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 17:34:52.037491 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 17:34:52.039120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 17:34:52.041106 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:34:52.043076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 17:34:52.045096 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:34:52.046954 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 17:34:52.048834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:34:52.050883 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 17:34:52.052766 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 17:34:52.054619 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 17:34:52.056091 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 17:34:52.056231 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:34:52.058460 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:34:52.060704 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:34:52.062599 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 17:34:52.062684 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:34:52.064686 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 17:34:52.064804 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:34:52.067656 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 17:34:52.067840 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:34:52.069640 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 17:34:52.071372 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 17:34:52.074512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:34:52.076350 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 17:34:52.078385 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 17:34:52.079934 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 17:34:52.080019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:34:52.082508 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 17:34:52.082590 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:34:52.084051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 17:34:52.084174 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:34:52.085829 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 17:34:52.085929 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 17:34:52.088242 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 17:34:52.089835 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 17:34:52.090938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 17:34:52.091057 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:34:52.092730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 17:34:52.092816 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:34:52.098017 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 17:34:52.108549 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 17:34:52.117163 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 17:34:52.123303 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 17:34:52.123628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 17:34:52.128797 ignition[1041]: INFO : Ignition 2.22.0
Dec 12 17:34:52.128797 ignition[1041]: INFO : Stage: umount
Dec 12 17:34:52.130375 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:34:52.130375 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:34:52.130375 ignition[1041]: INFO : umount: umount passed
Dec 12 17:34:52.130375 ignition[1041]: INFO : Ignition finished successfully
Dec 12 17:34:52.131233 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 17:34:52.131361 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 17:34:52.133548 systemd[1]: Stopped target network.target - Network.
Dec 12 17:34:52.134826 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 17:34:52.134891 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 17:34:52.136404 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 17:34:52.136446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 17:34:52.138134 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 17:34:52.138182 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 17:34:52.139709 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 17:34:52.139752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 17:34:52.141289 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 17:34:52.141352 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 17:34:52.143185 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 17:34:52.144681 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 17:34:52.154160 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 17:34:52.154496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 17:34:52.157575 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 17:34:52.157859 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 17:34:52.157894 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:34:52.161037 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:34:52.161261 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 17:34:52.161376 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 17:34:52.164958 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 17:34:52.165419 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 17:34:52.166674 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 17:34:52.166713 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:34:52.169462 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 17:34:52.170362 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 17:34:52.170421 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:34:52.172634 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 17:34:52.172680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:34:52.175408 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 17:34:52.175511 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:34:52.177574 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:34:52.181365 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 17:34:52.191108 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 17:34:52.192522 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:34:52.193802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 17:34:52.193841 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:34:52.195514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 17:34:52.195544 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:34:52.197521 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 17:34:52.197570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:34:52.200452 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 17:34:52.200513 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:34:52.203006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 17:34:52.203063 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:34:52.213096 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 17:34:52.214112 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 17:34:52.214167 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:34:52.217270 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 17:34:52.217326 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:34:52.220826 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 17:34:52.220868 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:34:52.224479 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 17:34:52.224525 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:34:52.227449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:34:52.227631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:52.230856 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 17:34:52.231636 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 17:34:52.237057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 17:34:52.237184 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 17:34:52.240794 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 17:34:52.243153 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 17:34:52.261396 systemd[1]: Switching root.
Dec 12 17:34:52.290714 systemd-journald[246]: Journal stopped
Dec 12 17:34:53.109168 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Dec 12 17:34:53.109222 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 17:34:53.109235 kernel: SELinux: policy capability open_perms=1
Dec 12 17:34:53.109245 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 17:34:53.109255 kernel: SELinux: policy capability always_check_network=0
Dec 12 17:34:53.109267 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 17:34:53.109276 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 17:34:53.109297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 17:34:53.109311 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 17:34:53.109325 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 17:34:53.109334 kernel: audit: type=1403 audit(1765560892.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 17:34:53.109348 systemd[1]: Successfully loaded SELinux policy in 47.447ms.
Dec 12 17:34:53.109368 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.715ms.
Dec 12 17:34:53.109379 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:34:53.109390 systemd[1]: Detected virtualization kvm.
Dec 12 17:34:53.109400 systemd[1]: Detected architecture arm64.
Dec 12 17:34:53.109411 systemd[1]: Detected first boot.
Dec 12 17:34:53.109421 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:34:53.109431 zram_generator::config[1088]: No configuration found.
Dec 12 17:34:53.109450 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 17:34:53.109460 systemd[1]: Populated /etc with preset unit settings.
Dec 12 17:34:53.109485 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 17:34:53.109496 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 17:34:53.109506 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 17:34:53.109518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 17:34:53.109529 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 17:34:53.109539 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 17:34:53.109549 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 17:34:53.109560 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 17:34:53.109571 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 17:34:53.109581 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 17:34:53.109591 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 17:34:53.109601 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 17:34:53.109613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:34:53.109623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:34:53.109633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 17:34:53.109644 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 17:34:53.109655 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 17:34:53.109666 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:34:53.109676 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 12 17:34:53.109687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:34:53.109698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:34:53.109708 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 17:34:53.109718 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 17:34:53.109729 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:34:53.109739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 17:34:53.109749 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:34:53.109758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:34:53.109769 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:34:53.109779 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:34:53.109791 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 17:34:53.109801 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 17:34:53.109810 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 17:34:53.109821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:34:53.109831 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:34:53.109842 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:34:53.109852 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 17:34:53.109863 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 17:34:53.109874 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 17:34:53.109886 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 17:34:53.109896 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 17:34:53.109907 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 17:34:53.109917 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 17:34:53.109928 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 17:34:53.109938 systemd[1]: Reached target machines.target - Containers.
Dec 12 17:34:53.109948 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 17:34:53.109959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:53.109971 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:34:53.109981 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 17:34:53.109992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:53.110002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:34:53.110012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:53.110023 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 17:34:53.110034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:53.110044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 17:34:53.110057 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 17:34:53.110068 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 17:34:53.110078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 17:34:53.110089 kernel: fuse: init (API version 7.41)
Dec 12 17:34:53.110099 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 17:34:53.110110 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:53.110121 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:34:53.110131 kernel: ACPI: bus type drm_connector registered
Dec 12 17:34:53.110141 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:34:53.110153 kernel: loop: module loaded
Dec 12 17:34:53.110162 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:34:53.110173 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 17:34:53.110184 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 17:34:53.110195 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:34:53.110207 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 17:34:53.110217 systemd[1]: Stopped verity-setup.service.
Dec 12 17:34:53.110252 systemd-journald[1163]: Collecting audit messages is disabled.
Dec 12 17:34:53.110274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 17:34:53.110292 systemd-journald[1163]: Journal started
Dec 12 17:34:53.110316 systemd-journald[1163]: Runtime Journal (/run/log/journal/4f2a12b76c10475391ef9fa463eaa5d5) is 6M, max 48.5M, 42.4M free.
Dec 12 17:34:52.856412 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 17:34:52.878691 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 17:34:52.879103 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 17:34:53.113488 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:34:53.114086 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 17:34:53.115378 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 17:34:53.116515 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 17:34:53.117673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 17:34:53.118848 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 17:34:53.120192 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 17:34:53.123514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:34:53.124899 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 17:34:53.125077 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 17:34:53.126656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:53.126953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:53.129739 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:34:53.129899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:34:53.131202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:53.131392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:53.134001 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 17:34:53.134205 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 17:34:53.135631 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:53.135800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:53.138539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:34:53.139954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:34:53.141763 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 17:34:53.143393 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 17:34:53.156193 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:34:53.159046 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 17:34:53.161426 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 17:34:53.162559 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 17:34:53.162606 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:34:53.164667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 17:34:53.172493 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 17:34:53.173715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:53.175002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 17:34:53.177212 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 17:34:53.178740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:34:53.179977 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 17:34:53.182570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:34:53.183651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:34:53.191675 systemd-journald[1163]: Time spent on flushing to /var/log/journal/4f2a12b76c10475391ef9fa463eaa5d5 is 24.264ms for 881 entries.
Dec 12 17:34:53.191675 systemd-journald[1163]: System Journal (/var/log/journal/4f2a12b76c10475391ef9fa463eaa5d5) is 8M, max 195.6M, 187.6M free.
Dec 12 17:34:53.229495 systemd-journald[1163]: Received client request to flush runtime journal.
Dec 12 17:34:53.229566 kernel: loop0: detected capacity change from 0 to 119840
Dec 12 17:34:53.188718 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 17:34:53.194705 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:34:53.232485 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 17:34:53.201517 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:34:53.204173 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 17:34:53.205531 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 17:34:53.207342 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 17:34:53.211221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 17:34:53.215332 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 17:34:53.218883 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:34:53.232097 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Dec 12 17:34:53.232108 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Dec 12 17:34:53.234914 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 17:34:53.237848 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:34:53.243895 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 17:34:53.253324 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 17:34:53.254664 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 17:34:53.257533 kernel: loop1: detected capacity change from 0 to 200800
Dec 12 17:34:53.277518 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 17:34:53.280168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:34:53.283892 kernel: loop2: detected capacity change from 0 to 100632
Dec 12 17:34:53.300268 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Dec 12 17:34:53.300284 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Dec 12 17:34:53.305251 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:34:53.314510 kernel: loop3: detected capacity change from 0 to 119840
Dec 12 17:34:53.319494 kernel: loop4: detected capacity change from 0 to 200800
Dec 12 17:34:53.326506 kernel: loop5: detected capacity change from 0 to 100632
Dec 12 17:34:53.331140 (sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 12 17:34:53.331564 (sd-merge)[1231]: Merged extensions into '/usr'.
Dec 12 17:34:53.335198 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 17:34:53.335217 systemd[1]: Reloading...
Dec 12 17:34:53.404507 zram_generator::config[1266]: No configuration found.
Dec 12 17:34:53.503015 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 17:34:53.533660 systemd[1]: Reloading finished in 197 ms.
Dec 12 17:34:53.565414 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 17:34:53.568523 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 17:34:53.585851 systemd[1]: Starting ensure-sysext.service...
Dec 12 17:34:53.587752 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:34:53.600071 systemd[1]: Reload requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
Dec 12 17:34:53.600086 systemd[1]: Reloading...
Dec 12 17:34:53.602462 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 17:34:53.602820 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 17:34:53.603122 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 17:34:53.603421 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 17:34:53.604120 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 17:34:53.604442 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Dec 12 17:34:53.604662 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Dec 12 17:34:53.612846 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:34:53.612980 systemd-tmpfiles[1292]: Skipping /boot
Dec 12 17:34:53.618938 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 17:34:53.619056 systemd-tmpfiles[1292]: Skipping /boot
Dec 12 17:34:53.655909 zram_generator::config[1319]: No configuration found.
Dec 12 17:34:53.788950 systemd[1]: Reloading finished in 188 ms.
Dec 12 17:34:53.799313 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 17:34:53.807520 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:34:53.818536 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:34:53.821194 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 17:34:53.829372 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 17:34:53.832709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:34:53.835634 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:34:53.839613 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 17:34:53.844457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:53.846793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:53.854498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:53.857634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:53.859732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:53.859866 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:53.861755 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 17:34:53.864376 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 17:34:53.866395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:53.868798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:53.870811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:53.877839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:53.882714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:53.882901 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:53.885004 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Dec 12 17:34:53.888098 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 17:34:53.896744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:53.899008 augenrules[1390]: No rules
Dec 12 17:34:53.900770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:53.903905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:53.920609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:53.922167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:53.922314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:53.926302 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 17:34:53.928358 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 17:34:53.930833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:34:53.933940 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 17:34:53.939619 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:34:53.939887 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:34:53.943518 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 17:34:53.946133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:53.946299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:53.950342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:53.958191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:53.960031 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:53.960182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:53.980115 systemd[1]: Finished ensure-sysext.service.
Dec 12 17:34:53.985612 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 17:34:53.992270 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 12 17:34:53.995018 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 17:34:53.997677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 17:34:53.998630 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 17:34:54.005603 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 17:34:54.010433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 17:34:54.013543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 17:34:54.016700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 17:34:54.016753 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 17:34:54.023323 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:34:54.029225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 17:34:54.030352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 17:34:54.030916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 17:34:54.031548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 17:34:54.033809 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 17:34:54.033964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 17:34:54.037016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 17:34:54.037185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 17:34:54.038632 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 17:34:54.038782 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 17:34:54.047781 systemd-resolved[1358]: Positive Trust Anchors:
Dec 12 17:34:54.048278 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:34:54.048493 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:34:54.049631 augenrules[1441]: /sbin/augenrules: No change
Dec 12 17:34:54.057485 systemd-resolved[1358]: Defaulting to hostname 'linux'.
Dec 12 17:34:54.057720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:34:54.059034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:34:54.060416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:34:54.065360 augenrules[1472]: No rules
Dec 12 17:34:54.068790 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 17:34:54.070009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 17:34:54.070081 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 17:34:54.070448 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 17:34:54.070682 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 17:34:54.090815 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 17:34:54.118419 systemd-networkd[1449]: lo: Link UP
Dec 12 17:34:54.118426 systemd-networkd[1449]: lo: Gained carrier
Dec 12 17:34:54.119258 systemd-networkd[1449]: Enumeration completed
Dec 12 17:34:54.119381 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:34:54.119737 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:34:54.119741 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:34:54.120337 systemd-networkd[1449]: eth0: Link UP
Dec 12 17:34:54.120443 systemd-networkd[1449]: eth0: Gained carrier
Dec 12 17:34:54.120458 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:34:54.121121 systemd[1]: Reached target network.target - Network.
Dec 12 17:34:54.123945 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 17:34:54.126607 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 17:34:54.129715 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 17:34:54.130971 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:34:54.132030 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 17:34:54.133182 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 17:34:54.134442 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 17:34:54.135537 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 17:34:54.135567 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:34:54.136378 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 17:34:54.137575 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 17:34:54.138535 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:34:54.138606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 17:34:54.139801 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:34:54.140047 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection.
Dec 12 17:34:54.141038 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 12 17:34:54.141095 systemd-timesyncd[1455]: Initial clock synchronization to Fri 2025-12-12 17:34:53.906016 UTC.
Dec 12 17:34:54.141549 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 17:34:54.145868 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 17:34:54.148601 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 17:34:54.149950 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 17:34:54.151619 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 17:34:54.158112 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 17:34:54.160873 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 17:34:54.162975 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 17:34:54.164375 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 17:34:54.166066 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:34:54.167043 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:34:54.168505 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:34:54.168533 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:34:54.169624 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 17:34:54.171632 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 17:34:54.175630 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 17:34:54.186061 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 17:34:54.189719 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 17:34:54.190698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 17:34:54.193081 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 17:34:54.195294 jq[1505]: false
Dec 12 17:34:54.195720 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 17:34:54.200140 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 17:34:54.203802 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 17:34:54.217214 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 17:34:54.219050 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 17:34:54.219585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 17:34:54.221326 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 17:34:54.222652 extend-filesystems[1506]: Found /dev/vda6
Dec 12 17:34:54.224796 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 17:34:54.227705 extend-filesystems[1506]: Found /dev/vda9
Dec 12 17:34:54.228615 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 17:34:54.230799 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 17:34:54.230976 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 17:34:54.231210 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 17:34:54.231376 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 17:34:54.236036 extend-filesystems[1506]: Checking size of /dev/vda9
Dec 12 17:34:54.237781 jq[1525]: true
Dec 12 17:34:54.235145 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 17:34:54.235521 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 17:34:54.251035 extend-filesystems[1506]: Resized partition /dev/vda9
Dec 12 17:34:54.252194 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 17:34:54.253696 extend-filesystems[1545]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 17:34:54.256092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:34:54.262101 update_engine[1522]: I20251212 17:34:54.261802 1522 main.cc:92] Flatcar Update Engine starting
Dec 12 17:34:54.263585 jq[1533]: true
Dec 12 17:34:54.267496 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 12 17:34:54.277541 tar[1529]: linux-arm64/LICENSE
Dec 12 17:34:54.277541 tar[1529]: linux-arm64/helm
Dec 12 17:34:54.288461 dbus-daemon[1501]: [system] SELinux support is enabled
Dec 12 17:34:54.289807 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 17:34:54.296415 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 17:34:54.296447 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 17:34:54.298984 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 17:34:54.299010 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 17:34:54.299666 update_engine[1522]: I20251212 17:34:54.299609 1522 update_check_scheduler.cc:74] Next update check in 4m14s
Dec 12 17:34:54.300736 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 17:34:54.314504 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 12 17:34:54.316702 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 17:34:54.335633 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 12 17:34:54.335633 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 12 17:34:54.335633 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 12 17:34:54.341543 extend-filesystems[1506]: Resized filesystem in /dev/vda9
Dec 12 17:34:54.339636 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 17:34:54.347480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 17:34:54.351205 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 12 17:34:54.351566 systemd-logind[1517]: New seat seat0.
Dec 12 17:34:54.360400 bash[1567]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 17:34:54.383801 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 17:34:54.385102 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 17:34:54.389506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:34:54.391960 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 12 17:34:54.396081 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 17:34:54.434028 containerd[1532]: time="2025-12-12T17:34:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 17:34:54.434976 containerd[1532]: time="2025-12-12T17:34:54.434940840Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 17:34:54.443728 containerd[1532]: time="2025-12-12T17:34:54.443676120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.2µs"
Dec 12 17:34:54.443728 containerd[1532]: time="2025-12-12T17:34:54.443716280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 17:34:54.443805 containerd[1532]: time="2025-12-12T17:34:54.443733840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 17:34:54.443919 containerd[1532]: time="2025-12-12T17:34:54.443888440Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 17:34:54.443919 containerd[1532]: time="2025-12-12T17:34:54.443910200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 17:34:54.443982 containerd[1532]: time="2025-12-12T17:34:54.443933960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444000 containerd[1532]: time="2025-12-12T17:34:54.443983720Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444000 containerd[1532]: time="2025-12-12T17:34:54.443994960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444245 containerd[1532]: time="2025-12-12T17:34:54.444208240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444245 containerd[1532]: time="2025-12-12T17:34:54.444232520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444299 containerd[1532]: time="2025-12-12T17:34:54.444245080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444299 containerd[1532]: time="2025-12-12T17:34:54.444253960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444358 containerd[1532]: time="2025-12-12T17:34:54.444339480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444587 containerd[1532]: time="2025-12-12T17:34:54.444566480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444618 containerd[1532]: time="2025-12-12T17:34:54.444602320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 17:34:54.444639 containerd[1532]: time="2025-12-12T17:34:54.444619840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 17:34:54.444662 containerd[1532]: time="2025-12-12T17:34:54.444650680Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 17:34:54.444880 containerd[1532]: time="2025-12-12T17:34:54.444860840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 17:34:54.444946 containerd[1532]: time="2025-12-12T17:34:54.444930440Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 17:34:54.448371 containerd[1532]: time="2025-12-12T17:34:54.448332680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 17:34:54.448444 containerd[1532]: time="2025-12-12T17:34:54.448401200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 17:34:54.448444 containerd[1532]: time="2025-12-12T17:34:54.448414880Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 17:34:54.448444 containerd[1532]: time="2025-12-12T17:34:54.448426160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 17:34:54.448444 containerd[1532]: time="2025-12-12T17:34:54.448438840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 17:34:54.448562 containerd[1532]: time="2025-12-12T17:34:54.448506200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 17:34:54.448562 containerd[1532]: time="2025-12-12T17:34:54.448521760Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 17:34:54.448562 containerd[1532]: time="2025-12-12T17:34:54.448543760Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 17:34:54.448562 containerd[1532]: time="2025-12-12T17:34:54.448554880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 17:34:54.448624 containerd[1532]: time="2025-12-12T17:34:54.448576240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 17:34:54.448624 containerd[1532]: time="2025-12-12T17:34:54.448585880Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 17:34:54.448624 containerd[1532]: time="2025-12-12T17:34:54.448600720Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 17:34:54.448742 containerd[1532]: time="2025-12-12T17:34:54.448721200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 17:34:54.448767 containerd[1532]: time="2025-12-12T17:34:54.448747480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 17:34:54.448784 containerd[1532]: time="2025-12-12T17:34:54.448766000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 17:34:54.448784 containerd[1532]: time="2025-12-12T17:34:54.448778080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 17:34:54.448815 containerd[1532]: time="2025-12-12T17:34:54.448788840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 17:34:54.448815 containerd[1532]: time="2025-12-12T17:34:54.448799080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 17:34:54.448815 containerd[1532]: time="2025-12-12T17:34:54.448811160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 17:34:54.448868 containerd[1532]: time="2025-12-12T17:34:54.448820640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 17:34:54.448868 containerd[1532]: time="2025-12-12T17:34:54.448832000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 17:34:54.448868 containerd[1532]: time="2025-12-12T17:34:54.448842320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 17:34:54.448868 containerd[1532]: time="2025-12-12T17:34:54.448852200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 17:34:54.449034 containerd[1532]: time="2025-12-12T17:34:54.449019560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 17:34:54.449058 containerd[1532]: time="2025-12-12T17:34:54.449037760Z" level=info msg="Start snapshots syncer"
Dec 12 17:34:54.449058 containerd[1532]: time="2025-12-12T17:34:54.449063600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 17:34:54.450523 containerd[1532]: time="2025-12-12T17:34:54.450442960Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 17:34:54.450740 containerd[1532]: time="2025-12-12T17:34:54.450717880Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 17:34:54.450857 containerd[1532]: time="2025-12-12T17:34:54.450836920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 17:34:54.451037 containerd[1532]: time="2025-12-12T17:34:54.451009200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 17:34:54.451120 containerd[1532]: time="2025-12-12T17:34:54.451101600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 17:34:54.451174 containerd[1532]: time="2025-12-12T17:34:54.451161480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 17:34:54.451253 containerd[1532]: time="2025-12-12T17:34:54.451216080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 17:34:54.451342 containerd[1532]: time="2025-12-12T17:34:54.451326120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 17:34:54.451411 containerd[1532]: time="2025-12-12T17:34:54.451395400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 17:34:54.451479 containerd[1532]: time="2025-12-12T17:34:54.451450840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 17:34:54.451578 containerd[1532]: time="2025-12-12T17:34:54.451563040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 17:34:54.451635 containerd[1532]:
time="2025-12-12T17:34:54.451620040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:34:54.451722 containerd[1532]: time="2025-12-12T17:34:54.451707080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:34:54.451822 containerd[1532]: time="2025-12-12T17:34:54.451806120Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:34:54.451883 containerd[1532]: time="2025-12-12T17:34:54.451869120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:34:54.451934 containerd[1532]: time="2025-12-12T17:34:54.451917240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:34:54.451991 containerd[1532]: time="2025-12-12T17:34:54.451973360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:34:54.452037 containerd[1532]: time="2025-12-12T17:34:54.452024400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:34:54.452091 containerd[1532]: time="2025-12-12T17:34:54.452077880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:34:54.452146 containerd[1532]: time="2025-12-12T17:34:54.452130800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:34:54.452279 containerd[1532]: time="2025-12-12T17:34:54.452265320Z" level=info msg="runtime interface created" Dec 12 17:34:54.452843 containerd[1532]: time="2025-12-12T17:34:54.452326120Z" level=info msg="created NRI interface" Dec 12 17:34:54.452843 containerd[1532]: time="2025-12-12T17:34:54.452347840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:34:54.452843 containerd[1532]: time="2025-12-12T17:34:54.452365360Z" level=info msg="Connect containerd service" Dec 12 17:34:54.452843 containerd[1532]: time="2025-12-12T17:34:54.452400600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:34:54.453596 containerd[1532]: time="2025-12-12T17:34:54.453565400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:34:54.527448 containerd[1532]: time="2025-12-12T17:34:54.527368160Z" level=info msg="Start subscribing containerd event" Dec 12 17:34:54.527448 containerd[1532]: time="2025-12-12T17:34:54.527460640Z" level=info msg="Start recovering state" Dec 12 17:34:54.527584 containerd[1532]: time="2025-12-12T17:34:54.527567160Z" level=info msg="Start event monitor" Dec 12 17:34:54.527584 containerd[1532]: time="2025-12-12T17:34:54.527581840Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:34:54.527619 containerd[1532]: time="2025-12-12T17:34:54.527588840Z" level=info msg="Start streaming server" Dec 12 17:34:54.527619 containerd[1532]: time="2025-12-12T17:34:54.527597840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:34:54.527619 containerd[1532]: 
time="2025-12-12T17:34:54.527606000Z" level=info msg="runtime interface starting up..." Dec 12 17:34:54.527619 containerd[1532]: time="2025-12-12T17:34:54.527611520Z" level=info msg="starting plugins..." Dec 12 17:34:54.527703 containerd[1532]: time="2025-12-12T17:34:54.527625480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:34:54.528080 containerd[1532]: time="2025-12-12T17:34:54.528053120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:34:54.528279 containerd[1532]: time="2025-12-12T17:34:54.528213200Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:34:54.528433 containerd[1532]: time="2025-12-12T17:34:54.528420520Z" level=info msg="containerd successfully booted in 0.094721s" Dec 12 17:34:54.528597 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:34:54.617043 tar[1529]: linux-arm64/README.md Dec 12 17:34:54.632984 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:34:55.437668 systemd-networkd[1449]: eth0: Gained IPv6LL Dec 12 17:34:55.441519 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:34:55.443266 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:34:55.445929 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:34:55.448618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:34:55.450683 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:34:55.482703 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:34:55.482927 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:34:55.485078 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:34:55.487984 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:34:55.594358 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:34:55.613820 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:34:55.617846 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:34:55.638039 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:34:55.638272 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:34:55.642134 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:34:55.667435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:34:55.670325 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:34:55.672632 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:34:55.674210 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:34:56.060311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:34:56.062112 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:34:56.064879 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:34:56.067798 systemd[1]: Startup finished in 2.131s (kernel) + 4.857s (initrd) + 3.656s (userspace) = 10.645s. 
Dec 12 17:34:56.408157 kubelet[1642]: E1212 17:34:56.408047 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:34:56.410473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:34:56.410599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:34:56.412555 systemd[1]: kubelet.service: Consumed 707ms CPU time, 248.2M memory peak. Dec 12 17:35:00.544596 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:35:00.545730 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:35250.service - OpenSSH per-connection server daemon (10.0.0.1:35250). Dec 12 17:35:00.631402 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 35250 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:00.633322 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:00.639738 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:35:00.640669 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:35:00.646036 systemd-logind[1517]: New session 1 of user core. Dec 12 17:35:00.664139 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:35:00.667794 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:35:00.691838 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:35:00.694235 systemd-logind[1517]: New session c1 of user core. Dec 12 17:35:00.808311 systemd[1660]: Queued start job for default target default.target. Dec 12 17:35:00.818530 systemd[1660]: Created slice app.slice - User Application Slice. Dec 12 17:35:00.818557 systemd[1660]: Reached target paths.target - Paths. Dec 12 17:35:00.818599 systemd[1660]: Reached target timers.target - Timers. Dec 12 17:35:00.819865 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:35:00.829545 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:35:00.829623 systemd[1660]: Reached target sockets.target - Sockets. Dec 12 17:35:00.829664 systemd[1660]: Reached target basic.target - Basic System. Dec 12 17:35:00.829692 systemd[1660]: Reached target default.target - Main User Target. Dec 12 17:35:00.829718 systemd[1660]: Startup finished in 129ms. Dec 12 17:35:00.829876 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:35:00.831433 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:35:00.892220 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:35260.service - OpenSSH per-connection server daemon (10.0.0.1:35260). Dec 12 17:35:00.962354 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 35260 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:00.963803 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:00.970099 systemd-logind[1517]: New session 2 of user core. Dec 12 17:35:00.982712 systemd[1]: Started session-2.scope - Session 2 of User core. 
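The kubelet failure at the top of this span is a pre-flight exit, not a crash: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is only written by `kubeadm init` or `kubeadm join`, so this error (and the scheduled restarts that follow later in the log) is expected until then. A trivial sketch of the same check:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the kubelet error above; normally written by kubeadm.
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err != nil {
            fmt.Printf("kubelet would fail to start: %v\n", err)
            return
        }
        fmt.Println("kubelet config present")
    }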
Dec 12 17:35:01.035987 sshd[1674]: Connection closed by 10.0.0.1 port 35260 Dec 12 17:35:01.036511 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:01.062023 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:35260.service: Deactivated successfully. Dec 12 17:35:01.063748 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:35:01.066265 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:35:01.068715 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:53198.service - OpenSSH per-connection server daemon (10.0.0.1:53198). Dec 12 17:35:01.070848 systemd-logind[1517]: Removed session 2. Dec 12 17:35:01.135181 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 53198 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:01.137183 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:01.142606 systemd-logind[1517]: New session 3 of user core. Dec 12 17:35:01.150709 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:35:01.236733 sshd[1683]: Connection closed by 10.0.0.1 port 53198 Dec 12 17:35:01.237246 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:01.251240 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:53198.service: Deactivated successfully. Dec 12 17:35:01.253949 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:35:01.254740 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:35:01.257759 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:53208.service - OpenSSH per-connection server daemon (10.0.0.1:53208). Dec 12 17:35:01.258425 systemd-logind[1517]: Removed session 3. Dec 12 17:35:01.320643 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 53208 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:01.322921 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:01.327565 systemd-logind[1517]: New session 4 of user core. Dec 12 17:35:01.335924 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:35:01.390211 sshd[1692]: Connection closed by 10.0.0.1 port 53208 Dec 12 17:35:01.390671 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:01.410718 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:53208.service: Deactivated successfully. Dec 12 17:35:01.412920 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:35:01.413791 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:35:01.416320 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:53222.service - OpenSSH per-connection server daemon (10.0.0.1:53222). Dec 12 17:35:01.416952 systemd-logind[1517]: Removed session 4. Dec 12 17:35:01.480761 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 53222 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:01.482203 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:01.486571 systemd-logind[1517]: New session 5 of user core. Dec 12 17:35:01.502678 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 17:35:01.559268 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:35:01.559560 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:35:01.571486 sudo[1702]: pam_unix(sudo:session): session closed for user root Dec 12 17:35:01.573049 sshd[1701]: Connection closed by 10.0.0.1 port 53222 Dec 12 17:35:01.573517 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:01.586711 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:53222.service: Deactivated successfully. Dec 12 17:35:01.589889 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:35:01.590672 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:35:01.593345 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:53234.service - OpenSSH per-connection server daemon (10.0.0.1:53234). Dec 12 17:35:01.593974 systemd-logind[1517]: Removed session 5. Dec 12 17:35:01.660905 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 53234 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:01.662287 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:01.667737 systemd-logind[1517]: New session 6 of user core. Dec 12 17:35:01.679695 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:35:01.732534 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:35:01.732823 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:35:01.808760 sudo[1713]: pam_unix(sudo:session): session closed for user root Dec 12 17:35:01.816109 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:35:01.816927 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:35:01.826764 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:35:01.877362 augenrules[1735]: No rules Dec 12 17:35:01.878649 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:35:01.878934 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:35:01.880403 sudo[1712]: pam_unix(sudo:session): session closed for user root Dec 12 17:35:01.882240 sshd[1711]: Connection closed by 10.0.0.1 port 53234 Dec 12 17:35:01.882756 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:01.892803 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:53234.service: Deactivated successfully. Dec 12 17:35:01.895175 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:35:01.897266 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:35:01.900383 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:53250.service - OpenSSH per-connection server daemon (10.0.0.1:53250). Dec 12 17:35:01.901009 systemd-logind[1517]: Removed session 6. Dec 12 17:35:01.963846 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 53250 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:35:01.965366 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:35:01.970819 systemd-logind[1517]: New session 7 of user core. Dec 12 17:35:01.981685 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 17:35:02.032545 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:35:02.032927 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:35:02.386869 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:35:02.408891 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:35:02.641409 dockerd[1768]: time="2025-12-12T17:35:02.641269724Z" level=info msg="Starting up" Dec 12 17:35:02.642197 dockerd[1768]: time="2025-12-12T17:35:02.642171127Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:35:02.653114 dockerd[1768]: time="2025-12-12T17:35:02.653070835Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:35:02.692133 dockerd[1768]: time="2025-12-12T17:35:02.692079543Z" level=info msg="Loading containers: start." Dec 12 17:35:02.701481 kernel: Initializing XFRM netlink socket Dec 12 17:35:02.915599 systemd-networkd[1449]: docker0: Link UP Dec 12 17:35:02.919924 dockerd[1768]: time="2025-12-12T17:35:02.919859770Z" level=info msg="Loading containers: done." Dec 12 17:35:02.935272 dockerd[1768]: time="2025-12-12T17:35:02.935203151Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:35:02.935450 dockerd[1768]: time="2025-12-12T17:35:02.935299942Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:35:02.935450 dockerd[1768]: time="2025-12-12T17:35:02.935395903Z" level=info msg="Initializing buildkit" Dec 12 17:35:02.959710 dockerd[1768]: time="2025-12-12T17:35:02.959670322Z" level=info msg="Completed buildkit initialization" Dec 12 17:35:02.964823 dockerd[1768]: time="2025-12-12T17:35:02.964732420Z" level=info msg="Daemon has completed initialization" Dec 12 17:35:02.964935 dockerd[1768]: time="2025-12-12T17:35:02.964790621Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:35:02.965023 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:35:03.439622 containerd[1532]: time="2025-12-12T17:35:03.439585571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 17:35:04.038984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790242485.mount: Deactivated successfully. 
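dockerd is now answering on /run/docker.sock with the overlay2 storage driver. A short sketch using the Docker Go SDK (github.com/docker/docker/client — an assumption; any Docker API client works) to read back the daemon version and storage driver seen in the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // FromEnv falls back to the default local docker.sock the daemon exposes.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        info, err := cli.Info(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("docker %s, storage driver %s\n", info.ServerVersion, info.Driver)
    }

The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled the daemon skips the native diff path, which the message says may slow image builds but does not affect running containers.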
Dec 12 17:35:05.066914 containerd[1532]: time="2025-12-12T17:35:05.066861677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:05.067680 containerd[1532]: time="2025-12-12T17:35:05.067646487Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571042" Dec 12 17:35:05.069424 containerd[1532]: time="2025-12-12T17:35:05.068680079Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:05.071559 containerd[1532]: time="2025-12-12T17:35:05.071526516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:05.072593 containerd[1532]: time="2025-12-12T17:35:05.072571221Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.632946824s" Dec 12 17:35:05.072682 containerd[1532]: time="2025-12-12T17:35:05.072668226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 12 17:35:05.073289 containerd[1532]: time="2025-12-12T17:35:05.073213302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 17:35:06.102173 containerd[1532]: time="2025-12-12T17:35:06.102105286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:06.103363 containerd[1532]: time="2025-12-12T17:35:06.103326177Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135479" Dec 12 17:35:06.105486 containerd[1532]: time="2025-12-12T17:35:06.105220245Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:06.108728 containerd[1532]: time="2025-12-12T17:35:06.108690505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:06.110168 containerd[1532]: time="2025-12-12T17:35:06.110135986Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.03682456s" Dec 12 17:35:06.110224 containerd[1532]: time="2025-12-12T17:35:06.110179093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 12 17:35:06.110826 
containerd[1532]: time="2025-12-12T17:35:06.110596730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 17:35:06.661001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:35:06.662400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:35:06.825303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:06.829425 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:35:06.933766 kubelet[2057]: E1212 17:35:06.933648 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:35:06.936934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:35:06.937314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:35:06.938665 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108M memory peak. Dec 12 17:35:07.106026 containerd[1532]: time="2025-12-12T17:35:07.105977254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:07.108613 containerd[1532]: time="2025-12-12T17:35:07.108570224Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191718" Dec 12 17:35:07.109788 containerd[1532]: time="2025-12-12T17:35:07.109733582Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:07.118348 containerd[1532]: time="2025-12-12T17:35:07.118270304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:07.119327 containerd[1532]: time="2025-12-12T17:35:07.119297830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.008668076s" Dec 12 17:35:07.119383 containerd[1532]: time="2025-12-12T17:35:07.119329203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 12 17:35:07.120207 containerd[1532]: time="2025-12-12T17:35:07.120187535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 17:35:08.256543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132186504.mount: Deactivated successfully. 
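The PullImage lines in this span come from the CRI plugin inside containerd, and the same pull can be reproduced directly against the socket. A hedged sketch with the containerd Go client, using the "k8s.io" namespace the daemon registered earlier (the image ref matches the kube-proxy pull in flight here):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace registered above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.34.3", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }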
Dec 12 17:35:08.555616 containerd[1532]: time="2025-12-12T17:35:08.555455753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:08.556800 containerd[1532]: time="2025-12-12T17:35:08.556771109Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805255" Dec 12 17:35:08.558082 containerd[1532]: time="2025-12-12T17:35:08.558042773Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:08.561415 containerd[1532]: time="2025-12-12T17:35:08.561132458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:08.562120 containerd[1532]: time="2025-12-12T17:35:08.562071253Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.441858264s" Dec 12 17:35:08.562120 containerd[1532]: time="2025-12-12T17:35:08.562109653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 12 17:35:08.562741 containerd[1532]: time="2025-12-12T17:35:08.562693058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 17:35:09.149728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118386516.mount: Deactivated successfully. 
Dec 12 17:35:10.550634 containerd[1532]: time="2025-12-12T17:35:10.550554445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:10.552810 containerd[1532]: time="2025-12-12T17:35:10.551036926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Dec 12 17:35:10.552810 containerd[1532]: time="2025-12-12T17:35:10.552250976Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:10.555570 containerd[1532]: time="2025-12-12T17:35:10.555505547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:10.556603 containerd[1532]: time="2025-12-12T17:35:10.556561785Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.993831945s" Dec 12 17:35:10.556603 containerd[1532]: time="2025-12-12T17:35:10.556600830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 12 17:35:10.557303 containerd[1532]: time="2025-12-12T17:35:10.557159567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 17:35:11.100446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242949034.mount: Deactivated successfully. 
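Because the kubelet speaks CRI rather than the native containerd API, the images pulled above are also visible through the CRI ImageService on the same socket. A sketch assuming k8s.io/cri-api and a recent google.golang.org/grpc (grpc.NewClient needs grpc-go 1.63 or newer; older versions can substitute grpc.Dial):

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Same socket the daemon serves; the CRI plugin answers on it too.
        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        images := runtimeapi.NewImageServiceClient(conn)
        resp, err := images.ListImages(context.Background(), &runtimeapi.ListImagesRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range resp.Images {
            fmt.Println(img.Id, img.RepoTags)
        }
    }

Run after the pulls above, the listing should include the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd references logged in this section.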
Dec 12 17:35:11.293000 containerd[1532]: time="2025-12-12T17:35:11.292808650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:11.354161 containerd[1532]: time="2025-12-12T17:35:11.354025804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Dec 12 17:35:11.369011 containerd[1532]: time="2025-12-12T17:35:11.368920856Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:11.374196 containerd[1532]: time="2025-12-12T17:35:11.374125746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:11.374937 containerd[1532]: time="2025-12-12T17:35:11.374889728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 817.698044ms" Dec 12 17:35:11.374937 containerd[1532]: time="2025-12-12T17:35:11.374926400Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 12 17:35:11.375450 containerd[1532]: time="2025-12-12T17:35:11.375411233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 17:35:11.888485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339893704.mount: Deactivated successfully. Dec 12 17:35:14.664267 containerd[1532]: time="2025-12-12T17:35:14.664205354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:14.668131 containerd[1532]: time="2025-12-12T17:35:14.668057624Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Dec 12 17:35:14.669326 containerd[1532]: time="2025-12-12T17:35:14.669262459Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:14.673744 containerd[1532]: time="2025-12-12T17:35:14.673663572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:14.675533 containerd[1532]: time="2025-12-12T17:35:14.675456916Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.300011552s" Dec 12 17:35:14.675533 containerd[1532]: time="2025-12-12T17:35:14.675512547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 12 17:35:17.187462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
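systemd is now at "restart counter is at 2" for kubelet.service, i.e. the unit's Restart= policy is cycling it while the config file is still missing. That counter is exposed as the NRestarts service property; a sketch reading it over D-Bus with github.com/coreos/go-systemd (an assumption — `systemctl show -p NRestarts kubelet.service` reports the same value):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
        ctx := context.Background()
        conn, err := dbus.NewWithContext(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // NRestarts is the counter systemd logs as "restart counter is at N".
        prop, err := conn.GetServicePropertyContext(ctx, "kubelet.service", "NRestarts")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("kubelet.service NRestarts =", prop.Value.Value())
    }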
Dec 12 17:35:17.189234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:35:17.345737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:17.349732 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:35:17.385897 kubelet[2218]: E1212 17:35:17.385849 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:35:17.388579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:35:17.388873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:35:17.390558 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.2M memory peak. Dec 12 17:35:19.281776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:19.282255 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.2M memory peak. Dec 12 17:35:19.286200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:35:19.313242 systemd[1]: Reload requested from client PID 2234 ('systemctl') (unit session-7.scope)... Dec 12 17:35:19.313257 systemd[1]: Reloading... Dec 12 17:35:19.393501 zram_generator::config[2276]: No configuration found. Dec 12 17:35:19.639730 systemd[1]: Reloading finished in 326 ms. Dec 12 17:35:19.705068 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:35:19.705154 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:35:19.705392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:19.705443 systemd[1]: kubelet.service: Consumed 100ms CPU time, 95.1M memory peak. Dec 12 17:35:19.706945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:35:19.932149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:19.950841 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:35:19.983263 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:35:19.983263 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:35:19.983727 kubelet[2321]: I1212 17:35:19.983695 2321 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:35:20.394903 kubelet[2321]: I1212 17:35:20.394859 2321 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:35:20.394903 kubelet[2321]: I1212 17:35:20.394895 2321 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:35:20.396179 kubelet[2321]: I1212 17:35:20.396155 2321 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:35:20.396224 kubelet[2321]: I1212 17:35:20.396183 2321 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:35:20.396464 kubelet[2321]: I1212 17:35:20.396440 2321 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:35:20.455700 kubelet[2321]: E1212 17:35:20.455659 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:35:20.456517 kubelet[2321]: I1212 17:35:20.456498 2321 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:35:20.460899 kubelet[2321]: I1212 17:35:20.460869 2321 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:35:20.467661 kubelet[2321]: I1212 17:35:20.467630 2321 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:35:20.467851 kubelet[2321]: I1212 17:35:20.467821 2321 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:35:20.470860 kubelet[2321]: I1212 17:35:20.467851 2321 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:35:20.470978 kubelet[2321]: I1212 17:35:20.470869 2321 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:35:20.470978 kubelet[2321]: I1212 17:35:20.470882 2321 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:35:20.471037 kubelet[2321]: I1212 17:35:20.471017 2321 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:35:20.475029 kubelet[2321]: I1212 17:35:20.475003 2321 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:35:20.477936 kubelet[2321]: I1212 17:35:20.477905 2321 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:35:20.477936 kubelet[2321]: I1212 17:35:20.477934 2321 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:35:20.478016 kubelet[2321]: I1212 17:35:20.477963 2321 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:35:20.478610 kubelet[2321]: E1212 17:35:20.478566 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:35:20.480341 kubelet[2321]: I1212 17:35:20.479852 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:35:20.480937 kubelet[2321]: E1212 17:35:20.480900 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:35:20.481040 kubelet[2321]: I1212 17:35:20.481021 2321 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:35:20.481713 kubelet[2321]: I1212 17:35:20.481685 2321 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:35:20.481771 kubelet[2321]: I1212 17:35:20.481724 2321 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:35:20.481771 kubelet[2321]: W1212 17:35:20.481770 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:35:20.485354 kubelet[2321]: I1212 17:35:20.485025 2321 server.go:1262] "Started kubelet" Dec 12 17:35:20.486322 kubelet[2321]: I1212 17:35:20.486293 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:35:20.487505 kubelet[2321]: I1212 17:35:20.487444 2321 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:35:20.487965 kubelet[2321]: I1212 17:35:20.487943 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:35:20.488620 kubelet[2321]: I1212 17:35:20.488596 2321 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:35:20.490359 kubelet[2321]: I1212 17:35:20.490318 2321 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:35:20.490847 kubelet[2321]: I1212 17:35:20.490507 2321 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:35:20.490847 kubelet[2321]: I1212 17:35:20.490755 2321 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:35:20.491361 kubelet[2321]: E1212 17:35:20.491342 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:35:20.491461 kubelet[2321]: I1212 17:35:20.491451 2321 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:35:20.491784 kubelet[2321]: I1212 17:35:20.491761 2321 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:35:20.491897 kubelet[2321]: I1212 17:35:20.491886 2321 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:35:20.492241 kubelet[2321]: E1212 17:35:20.492217 2321 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:35:20.492541 kubelet[2321]: I1212 17:35:20.492509 2321 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:35:20.492704 kubelet[2321]: I1212 17:35:20.492604 2321 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:35:20.492704 kubelet[2321]: E1212 17:35:20.492642 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:35:20.493226 kubelet[2321]: E1212 17:35:20.493141 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Dec 12 17:35:20.493607 kubelet[2321]: I1212 17:35:20.493587 2321 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:35:20.494600 kubelet[2321]: E1212 17:35:20.492755 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880884d86d1ab14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:35:20.484981524 +0000 UTC m=+0.531137984,LastTimestamp:2025-12-12 17:35:20.484981524 +0000 UTC m=+0.531137984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:35:20.504739 kubelet[2321]: I1212 17:35:20.504708 2321 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:35:20.504739 kubelet[2321]: I1212 17:35:20.504727 2321 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:35:20.505402 kubelet[2321]: I1212 17:35:20.505350 2321 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:35:20.510003 kubelet[2321]: I1212 17:35:20.509576 2321 policy_none.go:49] "None policy: Start" Dec 12 17:35:20.510003 kubelet[2321]: I1212 17:35:20.509603 2321 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:35:20.510003 kubelet[2321]: I1212 17:35:20.509614 2321 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:35:20.510193 kubelet[2321]: I1212 17:35:20.510163 2321 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:35:20.511381 kubelet[2321]: I1212 17:35:20.511362 2321 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:35:20.511491 kubelet[2321]: I1212 17:35:20.511481 2321 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:35:20.511576 kubelet[2321]: I1212 17:35:20.511567 2321 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:35:20.511672 kubelet[2321]: E1212 17:35:20.511656 2321 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:35:20.511727 kubelet[2321]: I1212 17:35:20.511498 2321 policy_none.go:47] "Start" Dec 12 17:35:20.511994 kubelet[2321]: E1212 17:35:20.511958 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:35:20.517675 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:35:20.532057 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:35:20.534977 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:35:20.549315 kubelet[2321]: E1212 17:35:20.548935 2321 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:35:20.549315 kubelet[2321]: I1212 17:35:20.549146 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:35:20.549315 kubelet[2321]: I1212 17:35:20.549156 2321 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:35:20.549704 kubelet[2321]: I1212 17:35:20.549637 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:35:20.550297 kubelet[2321]: E1212 17:35:20.550260 2321 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:35:20.550353 kubelet[2321]: E1212 17:35:20.550316 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:35:20.621544 systemd[1]: Created slice kubepods-burstable-poddf559d73eed3c7e718a893f6daad0217.slice - libcontainer container kubepods-burstable-poddf559d73eed3c7e718a893f6daad0217.slice. Dec 12 17:35:20.645880 kubelet[2321]: E1212 17:35:20.645769 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:20.649019 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Dec 12 17:35:20.650324 kubelet[2321]: I1212 17:35:20.650304 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:35:20.650719 kubelet[2321]: E1212 17:35:20.650686 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Dec 12 17:35:20.659562 kubelet[2321]: E1212 17:35:20.659525 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:20.662639 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 12 17:35:20.664738 kubelet[2321]: E1212 17:35:20.664704 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:20.694343 kubelet[2321]: E1212 17:35:20.694278 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Dec 12 17:35:20.793571 kubelet[2321]: I1212 17:35:20.793514 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df559d73eed3c7e718a893f6daad0217-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df559d73eed3c7e718a893f6daad0217\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:20.793571 kubelet[2321]: I1212 17:35:20.793556 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df559d73eed3c7e718a893f6daad0217-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df559d73eed3c7e718a893f6daad0217\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:20.793721 kubelet[2321]: I1212 17:35:20.793577 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:20.793721 kubelet[2321]: I1212 17:35:20.793694 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:20.793721 kubelet[2321]: I1212 17:35:20.793712 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:20.793810 kubelet[2321]: I1212 17:35:20.793732 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:20.793810 kubelet[2321]: I1212 17:35:20.793774 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:20.793810 kubelet[2321]: I1212 17:35:20.793788 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:20.793810 kubelet[2321]: I1212 17:35:20.793801 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df559d73eed3c7e718a893f6daad0217-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df559d73eed3c7e718a893f6daad0217\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:20.852720 kubelet[2321]: I1212 17:35:20.852655 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:35:20.853076 kubelet[2321]: E1212 17:35:20.853035 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Dec 12 17:35:20.950601 containerd[1532]: time="2025-12-12T17:35:20.950496870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df559d73eed3c7e718a893f6daad0217,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:20.964151 containerd[1532]: time="2025-12-12T17:35:20.964064231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:20.967734 containerd[1532]: time="2025-12-12T17:35:20.967684613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:21.094886 kubelet[2321]: E1212 17:35:21.094844 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Dec 12 17:35:21.254810 kubelet[2321]: I1212 17:35:21.254778 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:35:21.255494 kubelet[2321]: E1212 17:35:21.255411 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Dec 12 17:35:21.350547 kubelet[2321]: E1212 17:35:21.350499 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:35:21.399084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161618341.mount: Deactivated successfully. Dec 12 17:35:21.406633 containerd[1532]: time="2025-12-12T17:35:21.406563620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:35:21.410289 containerd[1532]: time="2025-12-12T17:35:21.410243900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:35:21.411379 containerd[1532]: time="2025-12-12T17:35:21.411327391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:35:21.413063 containerd[1532]: time="2025-12-12T17:35:21.413009455Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:35:21.414048 containerd[1532]: time="2025-12-12T17:35:21.414005945Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:35:21.416013 containerd[1532]: time="2025-12-12T17:35:21.415932586Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:35:21.416700 containerd[1532]: time="2025-12-12T17:35:21.416656885Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:35:21.417713 containerd[1532]: time="2025-12-12T17:35:21.417655333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:35:21.419512 containerd[1532]: time="2025-12-12T17:35:21.419164555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 449.921648ms" Dec 12 17:35:21.419921 containerd[1532]: time="2025-12-12T17:35:21.419894529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.863027ms" Dec 12 17:35:21.422482 containerd[1532]: time="2025-12-12T17:35:21.422270639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 456.872959ms" Dec 12 17:35:21.453718 containerd[1532]: time="2025-12-12T17:35:21.453665055Z" level=info msg="connecting to shim 
14c680da74b872c67aa421f59d63e1dc9b78a4e9640b46cf52488d9ea413e716" address="unix:///run/containerd/s/c3957b892c5bb7d8ea9c57dbdd83f3fe15489227d0b7a0c56eb2a15650f7e301" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:21.457219 containerd[1532]: time="2025-12-12T17:35:21.457164340Z" level=info msg="connecting to shim 8775069daa61c25587e21a89d6c506588678e65beb4075323962438f08d42ff2" address="unix:///run/containerd/s/de9ef5a5b1f8ac66da7775680db60e04abea0a0cb2e807ba3a42a36e4a23830f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:21.459856 containerd[1532]: time="2025-12-12T17:35:21.459447695Z" level=info msg="connecting to shim 579486579019bb784e3a8f64f4dcd4099ea2f358923c385db5092372147ced37" address="unix:///run/containerd/s/15e57e79b8ad6721c8da7df2d1cbe589f0376289b65a3847d6c1069cb49e8690" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:21.477971 kubelet[2321]: E1212 17:35:21.477925 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:35:21.488705 systemd[1]: Started cri-containerd-14c680da74b872c67aa421f59d63e1dc9b78a4e9640b46cf52488d9ea413e716.scope - libcontainer container 14c680da74b872c67aa421f59d63e1dc9b78a4e9640b46cf52488d9ea413e716. Dec 12 17:35:21.489944 systemd[1]: Started cri-containerd-8775069daa61c25587e21a89d6c506588678e65beb4075323962438f08d42ff2.scope - libcontainer container 8775069daa61c25587e21a89d6c506588678e65beb4075323962438f08d42ff2. Dec 12 17:35:21.493629 systemd[1]: Started cri-containerd-579486579019bb784e3a8f64f4dcd4099ea2f358923c385db5092372147ced37.scope - libcontainer container 579486579019bb784e3a8f64f4dcd4099ea2f358923c385db5092372147ced37. 
Dec 12 17:35:21.532875 containerd[1532]: time="2025-12-12T17:35:21.532323357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"14c680da74b872c67aa421f59d63e1dc9b78a4e9640b46cf52488d9ea413e716\"" Dec 12 17:35:21.536506 containerd[1532]: time="2025-12-12T17:35:21.536439919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df559d73eed3c7e718a893f6daad0217,Namespace:kube-system,Attempt:0,} returns sandbox id \"8775069daa61c25587e21a89d6c506588678e65beb4075323962438f08d42ff2\"" Dec 12 17:35:21.538792 containerd[1532]: time="2025-12-12T17:35:21.538679034Z" level=info msg="CreateContainer within sandbox \"14c680da74b872c67aa421f59d63e1dc9b78a4e9640b46cf52488d9ea413e716\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:35:21.544448 containerd[1532]: time="2025-12-12T17:35:21.544408204Z" level=info msg="CreateContainer within sandbox \"8775069daa61c25587e21a89d6c506588678e65beb4075323962438f08d42ff2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:35:21.545770 containerd[1532]: time="2025-12-12T17:35:21.545730037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"579486579019bb784e3a8f64f4dcd4099ea2f358923c385db5092372147ced37\"" Dec 12 17:35:21.549997 containerd[1532]: time="2025-12-12T17:35:21.549959095Z" level=info msg="Container 0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:21.550595 containerd[1532]: time="2025-12-12T17:35:21.550072832Z" level=info msg="CreateContainer within sandbox \"579486579019bb784e3a8f64f4dcd4099ea2f358923c385db5092372147ced37\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:35:21.558489 containerd[1532]: time="2025-12-12T17:35:21.557776238Z" level=info msg="Container 80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:21.561543 containerd[1532]: time="2025-12-12T17:35:21.561443090Z" level=info msg="CreateContainer within sandbox \"14c680da74b872c67aa421f59d63e1dc9b78a4e9640b46cf52488d9ea413e716\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c\"" Dec 12 17:35:21.562433 containerd[1532]: time="2025-12-12T17:35:21.562403573Z" level=info msg="StartContainer for \"0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c\"" Dec 12 17:35:21.563774 containerd[1532]: time="2025-12-12T17:35:21.563735557Z" level=info msg="Container 566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:21.563890 containerd[1532]: time="2025-12-12T17:35:21.563867277Z" level=info msg="connecting to shim 0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c" address="unix:///run/containerd/s/c3957b892c5bb7d8ea9c57dbdd83f3fe15489227d0b7a0c56eb2a15650f7e301" protocol=ttrpc version=3 Dec 12 17:35:21.570572 containerd[1532]: time="2025-12-12T17:35:21.570436119Z" level=info msg="CreateContainer within sandbox \"8775069daa61c25587e21a89d6c506588678e65beb4075323962438f08d42ff2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a\"" Dec 12 17:35:21.571005 containerd[1532]: time="2025-12-12T17:35:21.570971670Z" level=info msg="StartContainer for \"80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a\"" Dec 12 17:35:21.572626 containerd[1532]: time="2025-12-12T17:35:21.572592270Z" level=info msg="connecting to shim 80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a" address="unix:///run/containerd/s/de9ef5a5b1f8ac66da7775680db60e04abea0a0cb2e807ba3a42a36e4a23830f" protocol=ttrpc version=3 Dec 12 17:35:21.576799 containerd[1532]: time="2025-12-12T17:35:21.576720981Z" level=info msg="CreateContainer within sandbox \"579486579019bb784e3a8f64f4dcd4099ea2f358923c385db5092372147ced37\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f\"" Dec 12 17:35:21.577328 containerd[1532]: time="2025-12-12T17:35:21.577301691Z" level=info msg="StartContainer for \"566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f\"" Dec 12 17:35:21.579325 containerd[1532]: time="2025-12-12T17:35:21.579274090Z" level=info msg="connecting to shim 566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f" address="unix:///run/containerd/s/15e57e79b8ad6721c8da7df2d1cbe589f0376289b65a3847d6c1069cb49e8690" protocol=ttrpc version=3 Dec 12 17:35:21.587692 systemd[1]: Started cri-containerd-0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c.scope - libcontainer container 0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c. Dec 12 17:35:21.600712 systemd[1]: Started cri-containerd-80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a.scope - libcontainer container 80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a. Dec 12 17:35:21.603808 systemd[1]: Started cri-containerd-566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f.scope - libcontainer container 566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f. 
Dec 12 17:35:21.651100 containerd[1532]: time="2025-12-12T17:35:21.651032572Z" level=info msg="StartContainer for \"0578a46c413f1da752a4369a6e16c4faaf27a5736ef73b665942668b6f202a0c\" returns successfully" Dec 12 17:35:21.659945 containerd[1532]: time="2025-12-12T17:35:21.659893402Z" level=info msg="StartContainer for \"566c273a8b4718e3e4af3a8fc7f4487a7131227e95579cbca1362750b3dbfd8f\" returns successfully" Dec 12 17:35:21.663907 containerd[1532]: time="2025-12-12T17:35:21.663848750Z" level=info msg="StartContainer for \"80fa32ceee4081b3d0a8c16d7de92177112d75b64c4f4418c62fa64b29e1883a\" returns successfully" Dec 12 17:35:22.057429 kubelet[2321]: I1212 17:35:22.057134 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:35:22.521485 kubelet[2321]: E1212 17:35:22.520001 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:22.523018 kubelet[2321]: E1212 17:35:22.522993 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:22.525523 kubelet[2321]: E1212 17:35:22.525502 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:23.528309 kubelet[2321]: E1212 17:35:23.528274 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:23.529826 kubelet[2321]: E1212 17:35:23.529801 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:35:23.859724 kubelet[2321]: E1212 17:35:23.859612 2321 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:35:23.948532 kubelet[2321]: I1212 17:35:23.948488 2321 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:35:23.993483 kubelet[2321]: I1212 17:35:23.993447 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:23.998779 kubelet[2321]: E1212 17:35:23.998602 2321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:23.998779 kubelet[2321]: I1212 17:35:23.998629 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:24.000794 kubelet[2321]: E1212 17:35:24.000731 2321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:24.000794 kubelet[2321]: I1212 17:35:24.000755 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:24.002448 kubelet[2321]: E1212 17:35:24.002422 2321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:24.482134 kubelet[2321]: I1212 17:35:24.482081 
2321 apiserver.go:52] "Watching apiserver" Dec 12 17:35:24.492727 kubelet[2321]: I1212 17:35:24.492677 2321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:35:25.880869 systemd[1]: Reload requested from client PID 2609 ('systemctl') (unit session-7.scope)... Dec 12 17:35:25.880885 systemd[1]: Reloading... Dec 12 17:35:25.961569 zram_generator::config[2652]: No configuration found. Dec 12 17:35:26.132693 systemd[1]: Reloading finished in 251 ms. Dec 12 17:35:26.153199 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:35:26.178454 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:35:26.178716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:26.178787 systemd[1]: kubelet.service: Consumed 867ms CPU time, 123.6M memory peak. Dec 12 17:35:26.180679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:35:26.320409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:35:26.342413 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:35:26.383121 kubelet[2694]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:35:26.383121 kubelet[2694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:35:26.383121 kubelet[2694]: I1212 17:35:26.383097 2694 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:35:26.392569 kubelet[2694]: I1212 17:35:26.392402 2694 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:35:26.392569 kubelet[2694]: I1212 17:35:26.392440 2694 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:35:26.392784 kubelet[2694]: I1212 17:35:26.392770 2694 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:35:26.393495 kubelet[2694]: I1212 17:35:26.392849 2694 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:35:26.393495 kubelet[2694]: I1212 17:35:26.393121 2694 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:35:26.394543 kubelet[2694]: I1212 17:35:26.394518 2694 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:35:26.397023 kubelet[2694]: I1212 17:35:26.396981 2694 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:35:26.400538 kubelet[2694]: I1212 17:35:26.400517 2694 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:35:26.403655 kubelet[2694]: I1212 17:35:26.403617 2694 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:35:26.403892 kubelet[2694]: I1212 17:35:26.403849 2694 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:35:26.404135 kubelet[2694]: I1212 17:35:26.403886 2694 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:35:26.404135 kubelet[2694]: I1212 17:35:26.404137 2694 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:35:26.404253 kubelet[2694]: I1212 17:35:26.404148 2694 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:35:26.404253 kubelet[2694]: I1212 17:35:26.404179 2694 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:35:26.405426 kubelet[2694]: I1212 17:35:26.405401 2694 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:35:26.405600 kubelet[2694]: I1212 17:35:26.405587 2694 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:35:26.405631 kubelet[2694]: I1212 17:35:26.405611 2694 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:35:26.405653 kubelet[2694]: I1212 17:35:26.405636 2694 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:35:26.405653 kubelet[2694]: I1212 17:35:26.405651 2694 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:35:26.411645 kubelet[2694]: I1212 17:35:26.411615 2694 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:35:26.412259 kubelet[2694]: I1212 17:35:26.412201 2694 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:35:26.412259 kubelet[2694]: I1212 17:35:26.412234 2694 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:35:26.415193 
kubelet[2694]: I1212 17:35:26.415157 2694 server.go:1262] "Started kubelet" Dec 12 17:35:26.417488 kubelet[2694]: I1212 17:35:26.416218 2694 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:35:26.417488 kubelet[2694]: I1212 17:35:26.416275 2694 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:35:26.417488 kubelet[2694]: I1212 17:35:26.416498 2694 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:35:26.417488 kubelet[2694]: I1212 17:35:26.416550 2694 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:35:26.418125 kubelet[2694]: I1212 17:35:26.418096 2694 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:35:26.419064 kubelet[2694]: I1212 17:35:26.419041 2694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:35:26.428055 kubelet[2694]: I1212 17:35:26.428012 2694 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:35:26.433646 kubelet[2694]: I1212 17:35:26.433606 2694 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:35:26.435025 kubelet[2694]: I1212 17:35:26.434926 2694 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:35:26.436008 kubelet[2694]: I1212 17:35:26.435945 2694 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:35:26.436464 kubelet[2694]: E1212 17:35:26.436433 2694 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:35:26.436676 kubelet[2694]: I1212 17:35:26.436193 2694 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:35:26.437110 kubelet[2694]: I1212 17:35:26.436958 2694 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:35:26.439414 kubelet[2694]: I1212 17:35:26.439371 2694 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:35:26.441484 kubelet[2694]: I1212 17:35:26.441423 2694 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:35:26.443344 kubelet[2694]: I1212 17:35:26.443268 2694 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:35:26.443344 kubelet[2694]: I1212 17:35:26.443310 2694 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:35:26.443344 kubelet[2694]: I1212 17:35:26.443343 2694 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:35:26.443463 kubelet[2694]: E1212 17:35:26.443389 2694 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:35:26.480795 kubelet[2694]: I1212 17:35:26.480768 2694 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:35:26.480795 kubelet[2694]: I1212 17:35:26.480786 2694 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:35:26.480951 kubelet[2694]: I1212 17:35:26.480809 2694 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:35:26.480975 kubelet[2694]: I1212 17:35:26.480952 2694 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:35:26.480975 kubelet[2694]: I1212 17:35:26.480962 2694 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:35:26.481013 kubelet[2694]: I1212 17:35:26.480979 2694 policy_none.go:49] "None policy: Start" Dec 12 17:35:26.481013 kubelet[2694]: I1212 17:35:26.480988 2694 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:35:26.481013 kubelet[2694]: I1212 17:35:26.480997 2694 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:35:26.481163 kubelet[2694]: I1212 17:35:26.481146 2694 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 17:35:26.481163 kubelet[2694]: I1212 17:35:26.481160 2694 policy_none.go:47] "Start" Dec 12 17:35:26.487106 kubelet[2694]: E1212 17:35:26.487074 2694 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:35:26.487300 kubelet[2694]: I1212 17:35:26.487284 2694 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:35:26.487345 kubelet[2694]: I1212 17:35:26.487303 2694 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:35:26.487817 kubelet[2694]: I1212 17:35:26.487800 2694 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:35:26.489248 kubelet[2694]: E1212 17:35:26.489206 2694 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:35:26.544635 kubelet[2694]: I1212 17:35:26.544593 2694 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:26.544972 kubelet[2694]: I1212 17:35:26.544766 2694 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:26.544972 kubelet[2694]: I1212 17:35:26.544674 2694 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:26.593242 kubelet[2694]: I1212 17:35:26.593196 2694 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:35:26.600613 kubelet[2694]: I1212 17:35:26.600579 2694 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:35:26.600741 kubelet[2694]: I1212 17:35:26.600664 2694 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:35:26.637406 kubelet[2694]: I1212 17:35:26.637285 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:26.637406 kubelet[2694]: I1212 17:35:26.637324 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df559d73eed3c7e718a893f6daad0217-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df559d73eed3c7e718a893f6daad0217\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:26.637406 kubelet[2694]: I1212 17:35:26.637359 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:26.637406 kubelet[2694]: I1212 17:35:26.637377 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:26.637992 kubelet[2694]: I1212 17:35:26.637405 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:26.638249 kubelet[2694]: I1212 17:35:26.638008 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df559d73eed3c7e718a893f6daad0217-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df559d73eed3c7e718a893f6daad0217\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:26.638249 kubelet[2694]: I1212 17:35:26.638025 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/df559d73eed3c7e718a893f6daad0217-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df559d73eed3c7e718a893f6daad0217\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:26.638249 kubelet[2694]: I1212 17:35:26.638040 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:26.638249 kubelet[2694]: I1212 17:35:26.638056 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:35:27.406942 kubelet[2694]: I1212 17:35:27.406898 2694 apiserver.go:52] "Watching apiserver" Dec 12 17:35:27.435797 kubelet[2694]: I1212 17:35:27.435758 2694 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:35:27.462633 kubelet[2694]: I1212 17:35:27.462387 2694 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:27.463691 kubelet[2694]: I1212 17:35:27.463614 2694 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:27.468930 kubelet[2694]: E1212 17:35:27.468884 2694 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:35:27.469480 kubelet[2694]: E1212 17:35:27.469452 2694 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:35:27.494118 kubelet[2694]: I1212 17:35:27.493834 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.493815807 podStartE2EDuration="1.493815807s" podCreationTimestamp="2025-12-12 17:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:35:27.484946997 +0000 UTC m=+1.138320399" watchObservedRunningTime="2025-12-12 17:35:27.493815807 +0000 UTC m=+1.147189209" Dec 12 17:35:27.504886 kubelet[2694]: I1212 17:35:27.503765 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.503748656 podStartE2EDuration="1.503748656s" podCreationTimestamp="2025-12-12 17:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:35:27.494622397 +0000 UTC m=+1.147995799" watchObservedRunningTime="2025-12-12 17:35:27.503748656 +0000 UTC m=+1.157122058" Dec 12 17:35:27.515398 kubelet[2694]: I1212 17:35:27.515338 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5153201250000001 podStartE2EDuration="1.515320125s" podCreationTimestamp="2025-12-12 17:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-12 17:35:27.503745375 +0000 UTC m=+1.157118737" watchObservedRunningTime="2025-12-12 17:35:27.515320125 +0000 UTC m=+1.168693527" Dec 12 17:35:32.242336 kubelet[2694]: I1212 17:35:32.242304 2694 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:35:32.243102 containerd[1532]: time="2025-12-12T17:35:32.243045997Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:35:32.243328 kubelet[2694]: I1212 17:35:32.243285 2694 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:35:32.823497 systemd[1]: Created slice kubepods-besteffort-pod4ef5d21f_355f_4939_bbf6_8198c6ac917a.slice - libcontainer container kubepods-besteffort-pod4ef5d21f_355f_4939_bbf6_8198c6ac917a.slice. Dec 12 17:35:32.877195 kubelet[2694]: I1212 17:35:32.877140 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78jf8\" (UniqueName: \"kubernetes.io/projected/4ef5d21f-355f-4939-bbf6-8198c6ac917a-kube-api-access-78jf8\") pod \"kube-proxy-pwmqq\" (UID: \"4ef5d21f-355f-4939-bbf6-8198c6ac917a\") " pod="kube-system/kube-proxy-pwmqq" Dec 12 17:35:32.877335 kubelet[2694]: I1212 17:35:32.877187 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ef5d21f-355f-4939-bbf6-8198c6ac917a-kube-proxy\") pod \"kube-proxy-pwmqq\" (UID: \"4ef5d21f-355f-4939-bbf6-8198c6ac917a\") " pod="kube-system/kube-proxy-pwmqq" Dec 12 17:35:32.877335 kubelet[2694]: I1212 17:35:32.877246 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ef5d21f-355f-4939-bbf6-8198c6ac917a-xtables-lock\") pod \"kube-proxy-pwmqq\" (UID: \"4ef5d21f-355f-4939-bbf6-8198c6ac917a\") " pod="kube-system/kube-proxy-pwmqq" Dec 12 17:35:32.877335 kubelet[2694]: I1212 17:35:32.877269 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ef5d21f-355f-4939-bbf6-8198c6ac917a-lib-modules\") pod \"kube-proxy-pwmqq\" (UID: \"4ef5d21f-355f-4939-bbf6-8198c6ac917a\") " pod="kube-system/kube-proxy-pwmqq" Dec 12 17:35:33.139811 containerd[1532]: time="2025-12-12T17:35:33.139707652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwmqq,Uid:4ef5d21f-355f-4939-bbf6-8198c6ac917a,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:33.168109 containerd[1532]: time="2025-12-12T17:35:33.168067084Z" level=info msg="connecting to shim 64e1e453acb0f600459d6cd8b9cd2d6110bf874708e2e5d41bd840e6254b6202" address="unix:///run/containerd/s/ea8a673e6d406e750c06de14d35a67d42b3bc9f5130b201644a159c3dd0df4c6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:33.192830 systemd[1]: Started cri-containerd-64e1e453acb0f600459d6cd8b9cd2d6110bf874708e2e5d41bd840e6254b6202.scope - libcontainer container 64e1e453acb0f600459d6cd8b9cd2d6110bf874708e2e5d41bd840e6254b6202. 
Dec 12 17:35:33.238112 containerd[1532]: time="2025-12-12T17:35:33.238072943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwmqq,Uid:4ef5d21f-355f-4939-bbf6-8198c6ac917a,Namespace:kube-system,Attempt:0,} returns sandbox id \"64e1e453acb0f600459d6cd8b9cd2d6110bf874708e2e5d41bd840e6254b6202\"" Dec 12 17:35:33.244872 containerd[1532]: time="2025-12-12T17:35:33.244817082Z" level=info msg="CreateContainer within sandbox \"64e1e453acb0f600459d6cd8b9cd2d6110bf874708e2e5d41bd840e6254b6202\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:35:33.258953 containerd[1532]: time="2025-12-12T17:35:33.257826187Z" level=info msg="Container 00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:33.271633 containerd[1532]: time="2025-12-12T17:35:33.271559912Z" level=info msg="CreateContainer within sandbox \"64e1e453acb0f600459d6cd8b9cd2d6110bf874708e2e5d41bd840e6254b6202\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a\"" Dec 12 17:35:33.272142 containerd[1532]: time="2025-12-12T17:35:33.272086006Z" level=info msg="StartContainer for \"00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a\"" Dec 12 17:35:33.273743 containerd[1532]: time="2025-12-12T17:35:33.273716889Z" level=info msg="connecting to shim 00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a" address="unix:///run/containerd/s/ea8a673e6d406e750c06de14d35a67d42b3bc9f5130b201644a159c3dd0df4c6" protocol=ttrpc version=3 Dec 12 17:35:33.292723 systemd[1]: Started cri-containerd-00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a.scope - libcontainer container 00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a. Dec 12 17:35:33.387285 containerd[1532]: time="2025-12-12T17:35:33.387242863Z" level=info msg="StartContainer for \"00c063c03a8961c700218d1d0a4cba4dd96f45fafb61f2bfc63cea8a8dc9200a\" returns successfully" Dec 12 17:35:33.451376 systemd[1]: Created slice kubepods-besteffort-podb6537b79_3d86_403d_bdf6_fce9ff42659e.slice - libcontainer container kubepods-besteffort-podb6537b79_3d86_403d_bdf6_fce9ff42659e.slice. 
Dec 12 17:35:33.480644 kubelet[2694]: I1212 17:35:33.480605 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6537b79-3d86-403d-bdf6-fce9ff42659e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-4nwtt\" (UID: \"b6537b79-3d86-403d-bdf6-fce9ff42659e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4nwtt" Dec 12 17:35:33.481180 kubelet[2694]: I1212 17:35:33.480653 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5vl\" (UniqueName: \"kubernetes.io/projected/b6537b79-3d86-403d-bdf6-fce9ff42659e-kube-api-access-bd5vl\") pod \"tigera-operator-65cdcdfd6d-4nwtt\" (UID: \"b6537b79-3d86-403d-bdf6-fce9ff42659e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-4nwtt" Dec 12 17:35:33.483163 kubelet[2694]: I1212 17:35:33.483071 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pwmqq" podStartSLOduration=1.483053806 podStartE2EDuration="1.483053806s" podCreationTimestamp="2025-12-12 17:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:35:33.482531792 +0000 UTC m=+7.135905194" watchObservedRunningTime="2025-12-12 17:35:33.483053806 +0000 UTC m=+7.136427208" Dec 12 17:35:33.758295 containerd[1532]: time="2025-12-12T17:35:33.758231191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4nwtt,Uid:b6537b79-3d86-403d-bdf6-fce9ff42659e,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:35:33.775475 containerd[1532]: time="2025-12-12T17:35:33.775422247Z" level=info msg="connecting to shim 4a3f05105f0e794695369a3056c50208b07c1bc9e3a6dd2b956633b84dcd5dbc" address="unix:///run/containerd/s/446455edeb06a5044b763c6b6ed7ebc529faf958209291d0acb29391fae86f72" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:33.799769 systemd[1]: Started cri-containerd-4a3f05105f0e794695369a3056c50208b07c1bc9e3a6dd2b956633b84dcd5dbc.scope - libcontainer container 4a3f05105f0e794695369a3056c50208b07c1bc9e3a6dd2b956633b84dcd5dbc. Dec 12 17:35:33.834761 containerd[1532]: time="2025-12-12T17:35:33.834715421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-4nwtt,Uid:b6537b79-3d86-403d-bdf6-fce9ff42659e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4a3f05105f0e794695369a3056c50208b07c1bc9e3a6dd2b956633b84dcd5dbc\"" Dec 12 17:35:33.837496 containerd[1532]: time="2025-12-12T17:35:33.836453987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:35:33.991976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311012428.mount: Deactivated successfully. Dec 12 17:35:35.309225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509792898.mount: Deactivated successfully. 
Dec 12 17:35:35.947243 containerd[1532]: time="2025-12-12T17:35:35.947187969Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:35.947980 containerd[1532]: time="2025-12-12T17:35:35.947959827Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 12 17:35:35.949271 containerd[1532]: time="2025-12-12T17:35:35.949243178Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:35.951273 containerd[1532]: time="2025-12-12T17:35:35.951223425Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:35.952019 containerd[1532]: time="2025-12-12T17:35:35.951905241Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.115400653s" Dec 12 17:35:35.952019 containerd[1532]: time="2025-12-12T17:35:35.951938322Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:35:35.956899 containerd[1532]: time="2025-12-12T17:35:35.956869960Z" level=info msg="CreateContainer within sandbox \"4a3f05105f0e794695369a3056c50208b07c1bc9e3a6dd2b956633b84dcd5dbc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:35:35.962393 containerd[1532]: time="2025-12-12T17:35:35.962368011Z" level=info msg="Container f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:35.968685 containerd[1532]: time="2025-12-12T17:35:35.968649040Z" level=info msg="CreateContainer within sandbox \"4a3f05105f0e794695369a3056c50208b07c1bc9e3a6dd2b956633b84dcd5dbc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c\"" Dec 12 17:35:35.969316 containerd[1532]: time="2025-12-12T17:35:35.969290496Z" level=info msg="StartContainer for \"f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c\"" Dec 12 17:35:35.970094 containerd[1532]: time="2025-12-12T17:35:35.970070154Z" level=info msg="connecting to shim f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c" address="unix:///run/containerd/s/446455edeb06a5044b763c6b6ed7ebc529faf958209291d0acb29391fae86f72" protocol=ttrpc version=3 Dec 12 17:35:35.993684 systemd[1]: Started cri-containerd-f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c.scope - libcontainer container f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c. 
Dec 12 17:35:36.021396 containerd[1532]: time="2025-12-12T17:35:36.021326871Z" level=info msg="StartContainer for \"f4f71f3cbaf70786425b209d719a1f0fa3cf64d0ca893f5e68227b17a8a74b6c\" returns successfully" Dec 12 17:35:36.498744 kubelet[2694]: I1212 17:35:36.498571 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-4nwtt" podStartSLOduration=1.381989375 podStartE2EDuration="3.498554857s" podCreationTimestamp="2025-12-12 17:35:33 +0000 UTC" firstStartedPulling="2025-12-12 17:35:33.836152339 +0000 UTC m=+7.489525741" lastFinishedPulling="2025-12-12 17:35:35.952717861 +0000 UTC m=+9.606091223" observedRunningTime="2025-12-12 17:35:36.498550617 +0000 UTC m=+10.151923979" watchObservedRunningTime="2025-12-12 17:35:36.498554857 +0000 UTC m=+10.151928259" Dec 12 17:35:39.426023 update_engine[1522]: I20251212 17:35:39.425925 1522 update_attempter.cc:509] Updating boot flags... Dec 12 17:35:41.329244 sudo[1748]: pam_unix(sudo:session): session closed for user root Dec 12 17:35:41.331391 sshd[1747]: Connection closed by 10.0.0.1 port 53250 Dec 12 17:35:41.331897 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 12 17:35:41.335830 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:53250.service: Deactivated successfully. Dec 12 17:35:41.338185 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:35:41.340629 systemd[1]: session-7.scope: Consumed 6.658s CPU time, 222M memory peak. Dec 12 17:35:41.342118 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:35:41.343268 systemd-logind[1517]: Removed session 7. Dec 12 17:35:49.847656 systemd[1]: Created slice kubepods-besteffort-podcaec8d3c_a04f_49a0_b951_b1e29eac8d72.slice - libcontainer container kubepods-besteffort-podcaec8d3c_a04f_49a0_b951_b1e29eac8d72.slice. Dec 12 17:35:49.890276 kubelet[2694]: I1212 17:35:49.890219 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmt4x\" (UniqueName: \"kubernetes.io/projected/caec8d3c-a04f-49a0-b951-b1e29eac8d72-kube-api-access-hmt4x\") pod \"calico-typha-89c5487cd-v987q\" (UID: \"caec8d3c-a04f-49a0-b951-b1e29eac8d72\") " pod="calico-system/calico-typha-89c5487cd-v987q" Dec 12 17:35:49.890276 kubelet[2694]: I1212 17:35:49.890271 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caec8d3c-a04f-49a0-b951-b1e29eac8d72-tigera-ca-bundle\") pod \"calico-typha-89c5487cd-v987q\" (UID: \"caec8d3c-a04f-49a0-b951-b1e29eac8d72\") " pod="calico-system/calico-typha-89c5487cd-v987q" Dec 12 17:35:49.890708 kubelet[2694]: I1212 17:35:49.890288 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/caec8d3c-a04f-49a0-b951-b1e29eac8d72-typha-certs\") pod \"calico-typha-89c5487cd-v987q\" (UID: \"caec8d3c-a04f-49a0-b951-b1e29eac8d72\") " pod="calico-system/calico-typha-89c5487cd-v987q" Dec 12 17:35:50.034652 systemd[1]: Created slice kubepods-besteffort-pod8f3e1c87_bd20_44d6_ae6d_dadef0756417.slice - libcontainer container kubepods-besteffort-pod8f3e1c87_bd20_44d6_ae6d_dadef0756417.slice. 
Dec 12 17:35:50.091971 kubelet[2694]: I1212 17:35:50.091925 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-cni-log-dir\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.091971 kubelet[2694]: I1212 17:35:50.091971 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brq7p\" (UniqueName: \"kubernetes.io/projected/8f3e1c87-bd20-44d6-ae6d-dadef0756417-kube-api-access-brq7p\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092135 kubelet[2694]: I1212 17:35:50.091991 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f3e1c87-bd20-44d6-ae6d-dadef0756417-tigera-ca-bundle\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092135 kubelet[2694]: I1212 17:35:50.092006 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-var-lib-calico\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092135 kubelet[2694]: I1212 17:35:50.092066 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-xtables-lock\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092135 kubelet[2694]: I1212 17:35:50.092098 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f3e1c87-bd20-44d6-ae6d-dadef0756417-node-certs\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092230 kubelet[2694]: I1212 17:35:50.092151 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-cni-net-dir\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092230 kubelet[2694]: I1212 17:35:50.092197 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-cni-bin-dir\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092230 kubelet[2694]: I1212 17:35:50.092219 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-flexvol-driver-host\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092295 kubelet[2694]: I1212 17:35:50.092255 2694 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-var-run-calico\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092295 kubelet[2694]: I1212 17:35:50.092273 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-policysync\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.092338 kubelet[2694]: I1212 17:35:50.092314 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f3e1c87-bd20-44d6-ae6d-dadef0756417-lib-modules\") pod \"calico-node-rvbcv\" (UID: \"8f3e1c87-bd20-44d6-ae6d-dadef0756417\") " pod="calico-system/calico-node-rvbcv" Dec 12 17:35:50.157828 kubelet[2694]: E1212 17:35:50.157633 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:35:50.158354 containerd[1532]: time="2025-12-12T17:35:50.158314888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-89c5487cd-v987q,Uid:caec8d3c-a04f-49a0-b951-b1e29eac8d72,Namespace:calico-system,Attempt:0,}" Dec 12 17:35:50.205428 kubelet[2694]: E1212 17:35:50.205266 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:50.205428 kubelet[2694]: W1212 17:35:50.205305 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:50.205428 kubelet[2694]: E1212 17:35:50.205333 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:50.210582 containerd[1532]: time="2025-12-12T17:35:50.210063129Z" level=info msg="connecting to shim f5ffb1f0f0a255c539a01e4abbf715b83d58b4492ddaa33a8f27852532b57e8e" address="unix:///run/containerd/s/6f42dec94cd43dbad8f849f47d2e52849eeb8c7fe1b030a3f160622c6c3fa430" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:35:50.242252 kubelet[2694]: E1212 17:35:50.241593 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:50.242252 kubelet[2694]: W1212 17:35:50.241783 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:50.242252 kubelet[2694]: E1212 17:35:50.241822 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 12 17:35:50.249112 kubelet[2694]: E1212 17:35:50.249041 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75"
Dec 12 17:35:50.298276 kubelet[2694]: I1212 17:35:50.297963 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvt9m\" (UniqueName: \"kubernetes.io/projected/32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75-kube-api-access-pvt9m\") pod \"csi-node-driver-pvz6g\" (UID: \"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75\") " pod="calico-system/csi-node-driver-pvz6g"
Dec 12 17:35:50.298276 kubelet[2694]: I1212 17:35:50.298223 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75-kubelet-dir\") pod \"csi-node-driver-pvz6g\" (UID: \"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75\") " pod="calico-system/csi-node-driver-pvz6g"
Dec 12 17:35:50.299606 kubelet[2694]: I1212 17:35:50.298613 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75-socket-dir\") pod \"csi-node-driver-pvz6g\" (UID: \"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75\") " pod="calico-system/csi-node-driver-pvz6g"
Dec 12 17:35:50.300105 kubelet[2694]: I1212 17:35:50.300085 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75-registration-dir\") pod \"csi-node-driver-pvz6g\" (UID: \"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75\") " pod="calico-system/csi-node-driver-pvz6g"
Dec 12 17:35:50.306083 kubelet[2694]: I1212 17:35:50.305414 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75-varrun\") pod \"csi-node-driver-pvz6g\" (UID: \"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75\") " pod="calico-system/csi-node-driver-pvz6g"
Dec 12 17:35:50.324661 systemd[1]: Started cri-containerd-f5ffb1f0f0a255c539a01e4abbf715b83d58b4492ddaa33a8f27852532b57e8e.scope - libcontainer container f5ffb1f0f0a255c539a01e4abbf715b83d58b4492ddaa33a8f27852532b57e8e.
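The probe failures repeat on every plugin rescan and every volume reconcile, and should stop once Calico installs its driver: the calico-node pod mounts flexvol-driver-host (see the volume list above), and the pod2daemon-flexvol image pulled later in this log provides the uds binary for /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/. For the handshake itself, a FlexVolume driver only has to answer init with a JSON status object on stdout. A hypothetical minimal driver, sketched in Go rather than the usual shell, might look like:

```go
// Hypothetical stand-in for the uds driver binary kubelet is probing for;
// only the "init" handshake is sketched.
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells kubelet this driver has no attach/detach phase.
		fmt.Print(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// Any other FlexVolume call is not implemented in this sketch.
	fmt.Print(`{"status":"Not supported"}`)
	os.Exit(1)
}
```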
Dec 12 17:35:50.339736 kubelet[2694]: E1212 17:35:50.339701 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:35:50.340330 containerd[1532]: time="2025-12-12T17:35:50.340294241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvbcv,Uid:8f3e1c87-bd20-44d6-ae6d-dadef0756417,Namespace:calico-system,Attempt:0,}"
Dec 12 17:35:50.362940 containerd[1532]: time="2025-12-12T17:35:50.362893543Z" level=info msg="connecting to shim 9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5" address="unix:///run/containerd/s/18482a5357fec30efd3067b20609478c2b51eb6535e0ff92e059c6ae9737d732" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:35:50.390688 systemd[1]: Started cri-containerd-9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5.scope - libcontainer container 9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5.
Dec 12 17:35:50.414209 kubelet[2694]: E1212 17:35:50.413707 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:35:50.418112 kubelet[2694]: W1212 17:35:50.417514 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:35:50.425538 kubelet[2694]: E1212 17:35:50.424867 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:35:50.460334 containerd[1532]: time="2025-12-12T17:35:50.460226513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-89c5487cd-v987q,Uid:caec8d3c-a04f-49a0-b951-b1e29eac8d72,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5ffb1f0f0a255c539a01e4abbf715b83d58b4492ddaa33a8f27852532b57e8e\""
Dec 12 17:35:50.462176 kubelet[2694]: E1212 17:35:50.461751 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:35:50.462579 containerd[1532]: time="2025-12-12T17:35:50.462545620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvbcv,Uid:8f3e1c87-bd20-44d6-ae6d-dadef0756417,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\""
Dec 12 17:35:50.464036 kubelet[2694]: E1212 17:35:50.463987 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:35:50.465509 containerd[1532]: time="2025-12-12T17:35:50.464819887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 17:35:51.459040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445603709.mount: Deactivated successfully.
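The recurring dns.go:154 entries mean the node's resolv.conf lists more than three nameservers; kubelet keeps only the first three (the conventional libc MAXNS limit) and logs the line it actually applied, here 1.1.1.1 1.0.0.1 8.8.8.8. A loose sketch of that truncation, not kubelet's code, with a made-up fourth address since the log only shows the survivors:

```go
package main

import (
	"fmt"
	"strings"
)

// capNameservers loosely mimics kubelet's three-nameserver cap.
func capNameservers(ns []string) (applied []string, exceeded bool) {
	const maxNameservers = 3 // conventional resolv.conf/libc limit
	if len(ns) > maxNameservers {
		return ns[:maxNameservers], true
	}
	return ns, false
}

func main() {
	// 10.0.0.53 is a hypothetical fourth entry for illustration.
	applied, exceeded := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.53"})
	if exceeded {
		fmt.Println("applied nameserver line is:", strings.Join(applied, " "))
	}
}
```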
Dec 12 17:35:52.444942 kubelet[2694]: E1212 17:35:52.444889 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75"
Dec 12 17:35:52.741715 containerd[1532]: time="2025-12-12T17:35:52.741649185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:35:52.742309 containerd[1532]: time="2025-12-12T17:35:52.742273272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 12 17:35:52.743163 containerd[1532]: time="2025-12-12T17:35:52.743124481Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:35:52.745129 containerd[1532]: time="2025-12-12T17:35:52.745102662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:35:52.745784 containerd[1532]: time="2025-12-12T17:35:52.745686709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.280828861s"
Dec 12 17:35:52.745784 containerd[1532]: time="2025-12-12T17:35:52.745725469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 12 17:35:52.746843 containerd[1532]: time="2025-12-12T17:35:52.746802440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 17:35:52.757415 containerd[1532]: time="2025-12-12T17:35:52.757321233Z" level=info msg="CreateContainer within sandbox \"f5ffb1f0f0a255c539a01e4abbf715b83d58b4492ddaa33a8f27852532b57e8e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 17:35:52.772571 containerd[1532]: time="2025-12-12T17:35:52.772042911Z" level=info msg="Container 2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:35:52.785230 containerd[1532]: time="2025-12-12T17:35:52.785147451Z" level=info msg="CreateContainer within sandbox \"f5ffb1f0f0a255c539a01e4abbf715b83d58b4492ddaa33a8f27852532b57e8e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3\""
Dec 12 17:35:52.785747 containerd[1532]: time="2025-12-12T17:35:52.785717097Z" level=info msg="StartContainer for \"2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3\""
Dec 12 17:35:52.788623 containerd[1532]: time="2025-12-12T17:35:52.788542327Z" level=info msg="connecting to shim 2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3" address="unix:///run/containerd/s/6f42dec94cd43dbad8f849f47d2e52849eeb8c7fe1b030a3f160622c6c3fa430" protocol=ttrpc version=3
Dec 12 17:35:52.810690 systemd[1]: Started cri-containerd-2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3.scope - libcontainer container 2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3.
Dec 12 17:35:52.850277 containerd[1532]: time="2025-12-12T17:35:52.850224388Z" level=info msg="StartContainer for \"2edd7cb99860e4579590693ac4ac31e1cfdfbb05c63ac6ad0caafcaf2425b7e3\" returns successfully"
Dec 12 17:35:53.525225 kubelet[2694]: E1212 17:35:53.525194 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:35:53.538441 kubelet[2694]: I1212 17:35:53.538236 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-89c5487cd-v987q" podStartSLOduration=2.255385968 podStartE2EDuration="4.538215572s" podCreationTimestamp="2025-12-12 17:35:49 +0000 UTC" firstStartedPulling="2025-12-12 17:35:50.463820715 +0000 UTC m=+24.117194117" lastFinishedPulling="2025-12-12 17:35:52.746650319 +0000 UTC m=+26.400023721" observedRunningTime="2025-12-12 17:35:53.53797933 +0000 UTC m=+27.191352772" watchObservedRunningTime="2025-12-12 17:35:53.538215572 +0000 UTC m=+27.191588934"
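The pod_startup_latency_tracker numbers above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (17:35:53.538215572 - 17:35:49 = 4.538215572s), and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 2.282829604s), giving 2.255385968. A quick check with seconds-past-17:35 taken straight from the log:

```go
package main

import "fmt"

// Sanity-check of the "Observed pod startup duration" entry above.
func main() {
	created := 49.000000000   // podCreationTimestamp 17:35:49
	running := 53.538215572   // observedRunningTime
	pullStart := 50.463820715 // firstStartedPulling
	pullEnd := 52.746650319   // lastFinishedPulling

	e2e := running - created           // podStartE2EDuration
	slo := e2e - (pullEnd - pullStart) // podStartSLOduration, pull time excluded
	fmt.Printf("e2e=%.9fs slo=%.9fs\n", e2e, slo) // ~4.538215572s and ~2.255385968s
}
```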
Dec 12 17:35:53.616173 kubelet[2694]: E1212 17:35:53.616087 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:35:53.616173 kubelet[2694]: W1212 17:35:53.616111 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:35:53.616173 kubelet[2694]: E1212 17:35:53.616132 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 12 17:35:53.663206 kubelet[2694]: E1212 17:35:53.663194 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.663206 kubelet[2694]: W1212 17:35:53.663205 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.663267 kubelet[2694]: E1212 17:35:53.663214 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.663407 kubelet[2694]: E1212 17:35:53.663394 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.663441 kubelet[2694]: W1212 17:35:53.663407 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.663441 kubelet[2694]: E1212 17:35:53.663416 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.663681 kubelet[2694]: E1212 17:35:53.663667 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.663722 kubelet[2694]: W1212 17:35:53.663684 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.663722 kubelet[2694]: E1212 17:35:53.663694 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.664063 kubelet[2694]: E1212 17:35:53.663960 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.664063 kubelet[2694]: W1212 17:35:53.663975 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.664063 kubelet[2694]: E1212 17:35:53.663986 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.664211 kubelet[2694]: E1212 17:35:53.664200 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.664264 kubelet[2694]: W1212 17:35:53.664254 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.664311 kubelet[2694]: E1212 17:35:53.664302 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:35:53.664614 kubelet[2694]: E1212 17:35:53.664512 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.664614 kubelet[2694]: W1212 17:35:53.664524 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.664614 kubelet[2694]: E1212 17:35:53.664533 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.664773 kubelet[2694]: E1212 17:35:53.664760 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.664821 kubelet[2694]: W1212 17:35:53.664811 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.664880 kubelet[2694]: E1212 17:35:53.664871 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.665096 kubelet[2694]: E1212 17:35:53.665084 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.665155 kubelet[2694]: W1212 17:35:53.665144 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.665224 kubelet[2694]: E1212 17:35:53.665212 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.665651 kubelet[2694]: E1212 17:35:53.665419 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.665651 kubelet[2694]: W1212 17:35:53.665430 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.665651 kubelet[2694]: E1212 17:35:53.665439 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:35:53.665832 kubelet[2694]: E1212 17:35:53.665809 2694 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:35:53.665889 kubelet[2694]: W1212 17:35:53.665878 2694 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:35:53.665940 kubelet[2694]: E1212 17:35:53.665931 2694 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:35:53.932397 containerd[1532]: time="2025-12-12T17:35:53.932258188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:53.934098 containerd[1532]: time="2025-12-12T17:35:53.934058567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 12 17:35:53.935033 containerd[1532]: time="2025-12-12T17:35:53.934979576Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:53.937907 containerd[1532]: time="2025-12-12T17:35:53.937878686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:53.938706 containerd[1532]: time="2025-12-12T17:35:53.938653334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.191790093s" Dec 12 17:35:53.938706 containerd[1532]: time="2025-12-12T17:35:53.938680854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:35:53.943480 containerd[1532]: time="2025-12-12T17:35:53.943429903Z" level=info msg="CreateContainer within sandbox \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:35:53.951727 containerd[1532]: time="2025-12-12T17:35:53.951675228Z" level=info msg="Container 87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:53.955542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083821349.mount: Deactivated successfully. Dec 12 17:35:53.959961 containerd[1532]: time="2025-12-12T17:35:53.959923913Z" level=info msg="CreateContainer within sandbox \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509\"" Dec 12 17:35:53.961364 containerd[1532]: time="2025-12-12T17:35:53.961334728Z" level=info msg="StartContainer for \"87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509\"" Dec 12 17:35:53.962894 containerd[1532]: time="2025-12-12T17:35:53.962866583Z" level=info msg="connecting to shim 87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509" address="unix:///run/containerd/s/18482a5357fec30efd3067b20609478c2b51eb6535e0ff92e059c6ae9737d732" protocol=ttrpc version=3 Dec 12 17:35:53.985673 systemd[1]: Started cri-containerd-87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509.scope - libcontainer container 87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509. 
Dec 12 17:35:54.045449 containerd[1532]: time="2025-12-12T17:35:54.045408256Z" level=info msg="StartContainer for \"87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509\" returns successfully" Dec 12 17:35:54.060733 systemd[1]: cri-containerd-87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509.scope: Deactivated successfully. Dec 12 17:35:54.092795 containerd[1532]: time="2025-12-12T17:35:54.092687805Z" level=info msg="received container exit event container_id:\"87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509\" id:\"87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509\" pid:3411 exited_at:{seconds:1765560954 nanos:80691246}" Dec 12 17:35:54.138992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87210e1f89c0a58be53d20c7d66ea66808db23defe8a06816f084bba639ac509-rootfs.mount: Deactivated successfully. Dec 12 17:35:54.444210 kubelet[2694]: E1212 17:35:54.444152 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75" Dec 12 17:35:54.529490 kubelet[2694]: I1212 17:35:54.529124 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:35:54.529490 kubelet[2694]: E1212 17:35:54.529413 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:35:54.529490 kubelet[2694]: E1212 17:35:54.529436 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:35:54.531414 containerd[1532]: time="2025-12-12T17:35:54.531379791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:35:56.444631 kubelet[2694]: E1212 17:35:56.444566 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75" Dec 12 17:35:57.639297 containerd[1532]: time="2025-12-12T17:35:57.638620671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:57.639297 containerd[1532]: time="2025-12-12T17:35:57.639040234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 12 17:35:57.640012 containerd[1532]: time="2025-12-12T17:35:57.639979083Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:57.642003 containerd[1532]: time="2025-12-12T17:35:57.641973580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:35:57.642743 containerd[1532]: time="2025-12-12T17:35:57.642721347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.111299995s" Dec 12 17:35:57.642833 containerd[1532]: time="2025-12-12T17:35:57.642820828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:35:57.649382 containerd[1532]: time="2025-12-12T17:35:57.649328246Z" level=info msg="CreateContainer within sandbox \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:35:57.659482 containerd[1532]: time="2025-12-12T17:35:57.658604088Z" level=info msg="Container 4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:35:57.666212 containerd[1532]: time="2025-12-12T17:35:57.666180436Z" level=info msg="CreateContainer within sandbox \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40\"" Dec 12 17:35:57.666934 containerd[1532]: time="2025-12-12T17:35:57.666914082Z" level=info msg="StartContainer for \"4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40\"" Dec 12 17:35:57.668462 containerd[1532]: time="2025-12-12T17:35:57.668427216Z" level=info msg="connecting to shim 4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40" address="unix:///run/containerd/s/18482a5357fec30efd3067b20609478c2b51eb6535e0ff92e059c6ae9737d732" protocol=ttrpc version=3 Dec 12 17:35:57.688714 systemd[1]: Started cri-containerd-4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40.scope - libcontainer container 4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40. Dec 12 17:35:57.766157 containerd[1532]: time="2025-12-12T17:35:57.766082524Z" level=info msg="StartContainer for \"4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40\" returns successfully" Dec 12 17:35:58.294462 systemd[1]: cri-containerd-4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40.scope: Deactivated successfully. Dec 12 17:35:58.295183 systemd[1]: cri-containerd-4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40.scope: Consumed 471ms CPU time, 177.3M memory peak, 3.2M read from disk, 165.9M written to disk. Dec 12 17:35:58.307667 containerd[1532]: time="2025-12-12T17:35:58.307611927Z" level=info msg="received container exit event container_id:\"4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40\" id:\"4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40\" pid:3471 exited_at:{seconds:1765560958 nanos:307330965}" Dec 12 17:35:58.327016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cebe7531168b208b4d449701141b3360dcdf96286e05bb02797d09031628e40-rootfs.mount: Deactivated successfully. Dec 12 17:35:58.334503 kubelet[2694]: I1212 17:35:58.334122 2694 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 17:35:58.483108 systemd[1]: Created slice kubepods-burstable-pod0cf243bf_b4d2_42a4_b6f2_a03640dafbdf.slice - libcontainer container kubepods-burstable-pod0cf243bf_b4d2_42a4_b6f2_a03640dafbdf.slice. 
Dec 12 17:35:58.491371 systemd[1]: Created slice kubepods-besteffort-pod9bfd2383_be0f_4729_9e33_be9b6740c131.slice - libcontainer container kubepods-besteffort-pod9bfd2383_be0f_4729_9e33_be9b6740c131.slice. Dec 12 17:35:58.500038 systemd[1]: Created slice kubepods-besteffort-pod32dc6de8_fc2a_4f91_8ad6_a6e0a252ed75.slice - libcontainer container kubepods-besteffort-pod32dc6de8_fc2a_4f91_8ad6_a6e0a252ed75.slice. Dec 12 17:35:58.505727 systemd[1]: Created slice kubepods-burstable-podf61d5068_3614_49f1_a04f_ea15b840e763.slice - libcontainer container kubepods-burstable-podf61d5068_3614_49f1_a04f_ea15b840e763.slice. Dec 12 17:35:58.516066 systemd[1]: Created slice kubepods-besteffort-pod808a23f5_b1ce_476f_8868_bf6cd3d8ccdd.slice - libcontainer container kubepods-besteffort-pod808a23f5_b1ce_476f_8868_bf6cd3d8ccdd.slice. Dec 12 17:35:58.521830 containerd[1532]: time="2025-12-12T17:35:58.521635847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvz6g,Uid:32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75,Namespace:calico-system,Attempt:0,}" Dec 12 17:35:58.524046 systemd[1]: Created slice kubepods-besteffort-pod5851147d_b983_4c29_b7c9_3b2c46491cf9.slice - libcontainer container kubepods-besteffort-pod5851147d_b983_4c29_b7c9_3b2c46491cf9.slice. Dec 12 17:35:58.533641 systemd[1]: Created slice kubepods-besteffort-podfa44fcd1_9f91_4b2a_b2d6_2f07eeafbaa0.slice - libcontainer container kubepods-besteffort-podfa44fcd1_9f91_4b2a_b2d6_2f07eeafbaa0.slice. Dec 12 17:35:58.540629 systemd[1]: Created slice kubepods-besteffort-pod1d63dbd1_5b8b_4814_ac94_b8332212f2ae.slice - libcontainer container kubepods-besteffort-pod1d63dbd1_5b8b_4814_ac94_b8332212f2ae.slice. Dec 12 17:35:58.545792 kubelet[2694]: E1212 17:35:58.545293 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:35:58.550229 containerd[1532]: time="2025-12-12T17:35:58.550187052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:35:58.598689 kubelet[2694]: I1212 17:35:58.598645 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cf243bf-b4d2-42a4-b6f2-a03640dafbdf-config-volume\") pod \"coredns-66bc5c9577-8vlzj\" (UID: \"0cf243bf-b4d2-42a4-b6f2-a03640dafbdf\") " pod="kube-system/coredns-66bc5c9577-8vlzj" Dec 12 17:35:58.598689 kubelet[2694]: I1212 17:35:58.598691 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5851147d-b983-4c29-b7c9-3b2c46491cf9-goldmane-key-pair\") pod \"goldmane-7c778bb748-p7wc8\" (UID: \"5851147d-b983-4c29-b7c9-3b2c46491cf9\") " pod="calico-system/goldmane-7c778bb748-p7wc8" Dec 12 17:35:58.598834 kubelet[2694]: I1212 17:35:58.598812 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9d2\" (UniqueName: \"kubernetes.io/projected/0cf243bf-b4d2-42a4-b6f2-a03640dafbdf-kube-api-access-7h9d2\") pod \"coredns-66bc5c9577-8vlzj\" (UID: \"0cf243bf-b4d2-42a4-b6f2-a03640dafbdf\") " pod="kube-system/coredns-66bc5c9577-8vlzj" Dec 12 17:35:58.598878 kubelet[2694]: I1212 17:35:58.598832 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-ca-bundle\") pod \"whisker-5f4c7667f4-qvtm2\" (UID: \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\") " pod="calico-system/whisker-5f4c7667f4-qvtm2" Dec 12 17:35:58.598995 kubelet[2694]: I1212 17:35:58.598971 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f61d5068-3614-49f1-a04f-ea15b840e763-config-volume\") pod \"coredns-66bc5c9577-dtnp2\" (UID: \"f61d5068-3614-49f1-a04f-ea15b840e763\") " pod="kube-system/coredns-66bc5c9577-dtnp2" Dec 12 17:35:58.599030 kubelet[2694]: I1212 17:35:58.599000 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-backend-key-pair\") pod \"whisker-5f4c7667f4-qvtm2\" (UID: \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\") " pod="calico-system/whisker-5f4c7667f4-qvtm2" Dec 12 17:35:58.599030 kubelet[2694]: I1212 17:35:58.599018 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/808a23f5-b1ce-476f-8868-bf6cd3d8ccdd-calico-apiserver-certs\") pod \"calico-apiserver-679d654f46-xhzpc\" (UID: \"808a23f5-b1ce-476f-8868-bf6cd3d8ccdd\") " pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" Dec 12 17:35:58.599152 kubelet[2694]: I1212 17:35:58.599132 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8j2m\" (UniqueName: \"kubernetes.io/projected/9bfd2383-be0f-4729-9e33-be9b6740c131-kube-api-access-n8j2m\") pod \"calico-apiserver-679d654f46-dpchc\" (UID: \"9bfd2383-be0f-4729-9e33-be9b6740c131\") " pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" Dec 12 17:35:58.599196 kubelet[2694]: I1212 17:35:58.599155 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr8bz\" (UniqueName: \"kubernetes.io/projected/f61d5068-3614-49f1-a04f-ea15b840e763-kube-api-access-zr8bz\") pod \"coredns-66bc5c9577-dtnp2\" (UID: \"f61d5068-3614-49f1-a04f-ea15b840e763\") " pod="kube-system/coredns-66bc5c9577-dtnp2" Dec 12 17:35:58.599196 kubelet[2694]: I1212 17:35:58.599170 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhj5\" (UniqueName: \"kubernetes.io/projected/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-kube-api-access-xbhj5\") pod \"whisker-5f4c7667f4-qvtm2\" (UID: \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\") " pod="calico-system/whisker-5f4c7667f4-qvtm2" Dec 12 17:35:58.599307 kubelet[2694]: I1212 17:35:58.599287 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d63dbd1-5b8b-4814-ac94-b8332212f2ae-tigera-ca-bundle\") pod \"calico-kube-controllers-576cd48c6c-7tthz\" (UID: \"1d63dbd1-5b8b-4814-ac94-b8332212f2ae\") " pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" Dec 12 17:35:58.599344 kubelet[2694]: I1212 17:35:58.599310 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnr2\" (UniqueName: \"kubernetes.io/projected/1d63dbd1-5b8b-4814-ac94-b8332212f2ae-kube-api-access-cgnr2\") pod \"calico-kube-controllers-576cd48c6c-7tthz\" (UID: \"1d63dbd1-5b8b-4814-ac94-b8332212f2ae\") " 
pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" Dec 12 17:35:58.599344 kubelet[2694]: I1212 17:35:58.599329 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5851147d-b983-4c29-b7c9-3b2c46491cf9-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-p7wc8\" (UID: \"5851147d-b983-4c29-b7c9-3b2c46491cf9\") " pod="calico-system/goldmane-7c778bb748-p7wc8" Dec 12 17:35:58.600079 kubelet[2694]: I1212 17:35:58.599482 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlhmc\" (UniqueName: \"kubernetes.io/projected/5851147d-b983-4c29-b7c9-3b2c46491cf9-kube-api-access-vlhmc\") pod \"goldmane-7c778bb748-p7wc8\" (UID: \"5851147d-b983-4c29-b7c9-3b2c46491cf9\") " pod="calico-system/goldmane-7c778bb748-p7wc8" Dec 12 17:35:58.600079 kubelet[2694]: I1212 17:35:58.599514 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46d4j\" (UniqueName: \"kubernetes.io/projected/808a23f5-b1ce-476f-8868-bf6cd3d8ccdd-kube-api-access-46d4j\") pod \"calico-apiserver-679d654f46-xhzpc\" (UID: \"808a23f5-b1ce-476f-8868-bf6cd3d8ccdd\") " pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" Dec 12 17:35:58.600079 kubelet[2694]: I1212 17:35:58.599671 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bfd2383-be0f-4729-9e33-be9b6740c131-calico-apiserver-certs\") pod \"calico-apiserver-679d654f46-dpchc\" (UID: \"9bfd2383-be0f-4729-9e33-be9b6740c131\") " pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" Dec 12 17:35:58.600079 kubelet[2694]: I1212 17:35:58.599710 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5851147d-b983-4c29-b7c9-3b2c46491cf9-config\") pod \"goldmane-7c778bb748-p7wc8\" (UID: \"5851147d-b983-4c29-b7c9-3b2c46491cf9\") " pod="calico-system/goldmane-7c778bb748-p7wc8" Dec 12 17:35:58.624488 containerd[1532]: time="2025-12-12T17:35:58.624412290Z" level=error msg="Failed to destroy network for sandbox \"b91767cfd11e75162982eb5564f6173c3793e21479afa93b90cedbc436fe9626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.625872 containerd[1532]: time="2025-12-12T17:35:58.625821822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvz6g,Uid:32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91767cfd11e75162982eb5564f6173c3793e21479afa93b90cedbc436fe9626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.626290 kubelet[2694]: E1212 17:35:58.626246 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91767cfd11e75162982eb5564f6173c3793e21479afa93b90cedbc436fe9626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 12 17:35:58.626358 kubelet[2694]: E1212 17:35:58.626322 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91767cfd11e75162982eb5564f6173c3793e21479afa93b90cedbc436fe9626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pvz6g" Dec 12 17:35:58.626358 kubelet[2694]: E1212 17:35:58.626341 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91767cfd11e75162982eb5564f6173c3793e21479afa93b90cedbc436fe9626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pvz6g" Dec 12 17:35:58.626426 kubelet[2694]: E1212 17:35:58.626398 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pvz6g_calico-system(32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pvz6g_calico-system(32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b91767cfd11e75162982eb5564f6173c3793e21479afa93b90cedbc436fe9626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75" Dec 12 17:35:58.658302 systemd[1]: run-netns-cni\x2d65ac2f03\x2da6e6\x2d78d8\x2df465\x2d6896c19c466b.mount: Deactivated successfully. 
Dec 12 17:35:58.789843 kubelet[2694]: E1212 17:35:58.789795 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:35:58.790650 containerd[1532]: time="2025-12-12T17:35:58.790594958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8vlzj,Uid:0cf243bf-b4d2-42a4-b6f2-a03640dafbdf,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:58.806823 containerd[1532]: time="2025-12-12T17:35:58.806721176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-dpchc,Uid:9bfd2383-be0f-4729-9e33-be9b6740c131,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:35:58.811176 kubelet[2694]: E1212 17:35:58.811131 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:35:58.812488 containerd[1532]: time="2025-12-12T17:35:58.811680819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dtnp2,Uid:f61d5068-3614-49f1-a04f-ea15b840e763,Namespace:kube-system,Attempt:0,}" Dec 12 17:35:58.828820 containerd[1532]: time="2025-12-12T17:35:58.828774446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-xhzpc,Uid:808a23f5-b1ce-476f-8868-bf6cd3d8ccdd,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:35:58.833628 containerd[1532]: time="2025-12-12T17:35:58.833459046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-p7wc8,Uid:5851147d-b983-4c29-b7c9-3b2c46491cf9,Namespace:calico-system,Attempt:0,}" Dec 12 17:35:58.842485 containerd[1532]: time="2025-12-12T17:35:58.842417483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4c7667f4-qvtm2,Uid:fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0,Namespace:calico-system,Attempt:0,}" Dec 12 17:35:58.846492 containerd[1532]: time="2025-12-12T17:35:58.846276196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576cd48c6c-7tthz,Uid:1d63dbd1-5b8b-4814-ac94-b8332212f2ae,Namespace:calico-system,Attempt:0,}" Dec 12 17:35:58.890294 containerd[1532]: time="2025-12-12T17:35:58.890247574Z" level=error msg="Failed to destroy network for sandbox \"defcd09d58202a5d99c177554fbf4ab24125cc22cf60aa85b12c7edf40d685df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.891962 containerd[1532]: time="2025-12-12T17:35:58.891910309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8vlzj,Uid:0cf243bf-b4d2-42a4-b6f2-a03640dafbdf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"defcd09d58202a5d99c177554fbf4ab24125cc22cf60aa85b12c7edf40d685df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.893357 kubelet[2694]: E1212 17:35:58.893295 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"defcd09d58202a5d99c177554fbf4ab24125cc22cf60aa85b12c7edf40d685df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.893479 kubelet[2694]: E1212 17:35:58.893380 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"defcd09d58202a5d99c177554fbf4ab24125cc22cf60aa85b12c7edf40d685df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8vlzj" Dec 12 17:35:58.893479 kubelet[2694]: E1212 17:35:58.893403 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"defcd09d58202a5d99c177554fbf4ab24125cc22cf60aa85b12c7edf40d685df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8vlzj" Dec 12 17:35:58.893763 kubelet[2694]: E1212 17:35:58.893716 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8vlzj_kube-system(0cf243bf-b4d2-42a4-b6f2-a03640dafbdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8vlzj_kube-system(0cf243bf-b4d2-42a4-b6f2-a03640dafbdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"defcd09d58202a5d99c177554fbf4ab24125cc22cf60aa85b12c7edf40d685df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8vlzj" podUID="0cf243bf-b4d2-42a4-b6f2-a03640dafbdf" Dec 12 17:35:58.912626 containerd[1532]: time="2025-12-12T17:35:58.912569286Z" level=error msg="Failed to destroy network for sandbox \"58b73fdba3b1f0778fc0603e94bdfcc3b50e2aa46c594144e387814f2c842657\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.914795 containerd[1532]: time="2025-12-12T17:35:58.914749185Z" level=error msg="Failed to destroy network for sandbox \"e64a30653d3f0d777d1002c62c7ed980e03a3d387fd22116b2184a6650876a53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.915324 containerd[1532]: time="2025-12-12T17:35:58.915259149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dtnp2,Uid:f61d5068-3614-49f1-a04f-ea15b840e763,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b73fdba3b1f0778fc0603e94bdfcc3b50e2aa46c594144e387814f2c842657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.915587 kubelet[2694]: E1212 17:35:58.915552 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b73fdba3b1f0778fc0603e94bdfcc3b50e2aa46c594144e387814f2c842657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.915646 kubelet[2694]: E1212 17:35:58.915605 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b73fdba3b1f0778fc0603e94bdfcc3b50e2aa46c594144e387814f2c842657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dtnp2" Dec 12 17:35:58.915646 kubelet[2694]: E1212 17:35:58.915627 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b73fdba3b1f0778fc0603e94bdfcc3b50e2aa46c594144e387814f2c842657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dtnp2" Dec 12 17:35:58.916317 kubelet[2694]: E1212 17:35:58.915681 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dtnp2_kube-system(f61d5068-3614-49f1-a04f-ea15b840e763)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dtnp2_kube-system(f61d5068-3614-49f1-a04f-ea15b840e763)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58b73fdba3b1f0778fc0603e94bdfcc3b50e2aa46c594144e387814f2c842657\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dtnp2" podUID="f61d5068-3614-49f1-a04f-ea15b840e763" Dec 12 17:35:58.917835 containerd[1532]: time="2025-12-12T17:35:58.917794771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-dpchc,Uid:9bfd2383-be0f-4729-9e33-be9b6740c131,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e64a30653d3f0d777d1002c62c7ed980e03a3d387fd22116b2184a6650876a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.918497 kubelet[2694]: E1212 17:35:58.918034 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e64a30653d3f0d777d1002c62c7ed980e03a3d387fd22116b2184a6650876a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.918497 kubelet[2694]: E1212 17:35:58.918090 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e64a30653d3f0d777d1002c62c7ed980e03a3d387fd22116b2184a6650876a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" Dec 12 17:35:58.918497 kubelet[2694]: E1212 17:35:58.918108 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"e64a30653d3f0d777d1002c62c7ed980e03a3d387fd22116b2184a6650876a53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" Dec 12 17:35:58.918633 kubelet[2694]: E1212 17:35:58.918154 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-679d654f46-dpchc_calico-apiserver(9bfd2383-be0f-4729-9e33-be9b6740c131)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-679d654f46-dpchc_calico-apiserver(9bfd2383-be0f-4729-9e33-be9b6740c131)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e64a30653d3f0d777d1002c62c7ed980e03a3d387fd22116b2184a6650876a53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" podUID="9bfd2383-be0f-4729-9e33-be9b6740c131" Dec 12 17:35:58.929412 containerd[1532]: time="2025-12-12T17:35:58.928915947Z" level=error msg="Failed to destroy network for sandbox \"f8b1a77208f9a6e7ef6c25b6dfbb2b4f67f65924d1e8ea77d89b5f61063d1e13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.932813 containerd[1532]: time="2025-12-12T17:35:58.932753700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-xhzpc,Uid:808a23f5-b1ce-476f-8868-bf6cd3d8ccdd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b1a77208f9a6e7ef6c25b6dfbb2b4f67f65924d1e8ea77d89b5f61063d1e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.933018 kubelet[2694]: E1212 17:35:58.932982 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b1a77208f9a6e7ef6c25b6dfbb2b4f67f65924d1e8ea77d89b5f61063d1e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.933119 kubelet[2694]: E1212 17:35:58.933035 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b1a77208f9a6e7ef6c25b6dfbb2b4f67f65924d1e8ea77d89b5f61063d1e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" Dec 12 17:35:58.933119 kubelet[2694]: E1212 17:35:58.933054 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b1a77208f9a6e7ef6c25b6dfbb2b4f67f65924d1e8ea77d89b5f61063d1e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" Dec 12 17:35:58.933119 kubelet[2694]: E1212 17:35:58.933100 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-679d654f46-xhzpc_calico-apiserver(808a23f5-b1ce-476f-8868-bf6cd3d8ccdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-679d654f46-xhzpc_calico-apiserver(808a23f5-b1ce-476f-8868-bf6cd3d8ccdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8b1a77208f9a6e7ef6c25b6dfbb2b4f67f65924d1e8ea77d89b5f61063d1e13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" podUID="808a23f5-b1ce-476f-8868-bf6cd3d8ccdd" Dec 12 17:35:58.936179 containerd[1532]: time="2025-12-12T17:35:58.936122288Z" level=error msg="Failed to destroy network for sandbox \"3d818f7593373046c441387a7e80cd3e1673e355f52b30ede86365fb9941ff69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.937618 containerd[1532]: time="2025-12-12T17:35:58.937572861Z" level=error msg="Failed to destroy network for sandbox \"c34a95b93322375adc4ad3788a5bd6d573cc8efadc99c944943a396788061ced\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.937695 containerd[1532]: time="2025-12-12T17:35:58.937623061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-p7wc8,Uid:5851147d-b983-4c29-b7c9-3b2c46491cf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d818f7593373046c441387a7e80cd3e1673e355f52b30ede86365fb9941ff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.938377 kubelet[2694]: E1212 17:35:58.937916 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d818f7593373046c441387a7e80cd3e1673e355f52b30ede86365fb9941ff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.938377 kubelet[2694]: E1212 17:35:58.937974 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d818f7593373046c441387a7e80cd3e1673e355f52b30ede86365fb9941ff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-p7wc8" Dec 12 17:35:58.938377 kubelet[2694]: E1212 17:35:58.937996 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d818f7593373046c441387a7e80cd3e1673e355f52b30ede86365fb9941ff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-p7wc8" Dec 12 17:35:58.938554 kubelet[2694]: E1212 17:35:58.938044 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-p7wc8_calico-system(5851147d-b983-4c29-b7c9-3b2c46491cf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-p7wc8_calico-system(5851147d-b983-4c29-b7c9-3b2c46491cf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d818f7593373046c441387a7e80cd3e1673e355f52b30ede86365fb9941ff69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-p7wc8" podUID="5851147d-b983-4c29-b7c9-3b2c46491cf9" Dec 12 17:35:58.939072 containerd[1532]: time="2025-12-12T17:35:58.939037394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576cd48c6c-7tthz,Uid:1d63dbd1-5b8b-4814-ac94-b8332212f2ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c34a95b93322375adc4ad3788a5bd6d573cc8efadc99c944943a396788061ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.939238 kubelet[2694]: E1212 17:35:58.939207 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c34a95b93322375adc4ad3788a5bd6d573cc8efadc99c944943a396788061ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.939275 kubelet[2694]: E1212 17:35:58.939249 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c34a95b93322375adc4ad3788a5bd6d573cc8efadc99c944943a396788061ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" Dec 12 17:35:58.939275 kubelet[2694]: E1212 17:35:58.939267 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c34a95b93322375adc4ad3788a5bd6d573cc8efadc99c944943a396788061ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" Dec 12 17:35:58.939323 kubelet[2694]: E1212 17:35:58.939301 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-576cd48c6c-7tthz_calico-system(1d63dbd1-5b8b-4814-ac94-b8332212f2ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-576cd48c6c-7tthz_calico-system(1d63dbd1-5b8b-4814-ac94-b8332212f2ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c34a95b93322375adc4ad3788a5bd6d573cc8efadc99c944943a396788061ced\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" podUID="1d63dbd1-5b8b-4814-ac94-b8332212f2ae" Dec 12 17:35:58.952376 containerd[1532]: time="2025-12-12T17:35:58.952321508Z" level=error msg="Failed to destroy network for sandbox \"5f31addb89da75a628ce6fe442c24d24a3324e24ca822e89f66980bfa36b3f7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.954389 containerd[1532]: time="2025-12-12T17:35:58.954356285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f4c7667f4-qvtm2,Uid:fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f31addb89da75a628ce6fe442c24d24a3324e24ca822e89f66980bfa36b3f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.954651 kubelet[2694]: E1212 17:35:58.954603 2694 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f31addb89da75a628ce6fe442c24d24a3324e24ca822e89f66980bfa36b3f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:35:58.954699 kubelet[2694]: E1212 17:35:58.954659 2694 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f31addb89da75a628ce6fe442c24d24a3324e24ca822e89f66980bfa36b3f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4c7667f4-qvtm2" Dec 12 17:35:58.954699 kubelet[2694]: E1212 17:35:58.954682 2694 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f31addb89da75a628ce6fe442c24d24a3324e24ca822e89f66980bfa36b3f7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f4c7667f4-qvtm2" Dec 12 17:35:58.954756 kubelet[2694]: E1212 17:35:58.954727 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f4c7667f4-qvtm2_calico-system(fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f4c7667f4-qvtm2_calico-system(fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f31addb89da75a628ce6fe442c24d24a3324e24ca822e89f66980bfa36b3f7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f4c7667f4-qvtm2" 
podUID="fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0" Dec 12 17:35:59.967707 kubelet[2694]: E1212 17:35:59.967667 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:00.547978 kubelet[2694]: E1212 17:36:00.547920 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:02.400494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679610570.mount: Deactivated successfully. Dec 12 17:36:02.638545 containerd[1532]: time="2025-12-12T17:36:02.638491531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:36:02.656150 containerd[1532]: time="2025-12-12T17:36:02.655692301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:02.657760 containerd[1532]: time="2025-12-12T17:36:02.657704437Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:02.661913 containerd[1532]: time="2025-12-12T17:36:02.661875108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:36:02.662483 containerd[1532]: time="2025-12-12T17:36:02.662375032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.11214722s" Dec 12 17:36:02.662483 containerd[1532]: time="2025-12-12T17:36:02.662409832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:36:02.681083 containerd[1532]: time="2025-12-12T17:36:02.681036213Z" level=info msg="CreateContainer within sandbox \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:36:02.688860 containerd[1532]: time="2025-12-12T17:36:02.688819952Z" level=info msg="Container 3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:02.691928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969135211.mount: Deactivated successfully. 
Dec 12 17:36:02.699417 containerd[1532]: time="2025-12-12T17:36:02.699373112Z" level=info msg="CreateContainer within sandbox \"9d509a7ab0dce1462767205439c9e1c8ec5e7fea7a905aafe9194ad94e2545f5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939\"" Dec 12 17:36:02.700252 containerd[1532]: time="2025-12-12T17:36:02.700223599Z" level=info msg="StartContainer for \"3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939\"" Dec 12 17:36:02.701988 containerd[1532]: time="2025-12-12T17:36:02.701941212Z" level=info msg="connecting to shim 3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939" address="unix:///run/containerd/s/18482a5357fec30efd3067b20609478c2b51eb6535e0ff92e059c6ae9737d732" protocol=ttrpc version=3 Dec 12 17:36:02.734730 systemd[1]: Started cri-containerd-3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939.scope - libcontainer container 3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939. Dec 12 17:36:02.829715 containerd[1532]: time="2025-12-12T17:36:02.829668980Z" level=info msg="StartContainer for \"3bbd4bfa2b6b135c4bb2f69b3b9b43e997061b7b28d6bc3060d8a2408c41f939\" returns successfully" Dec 12 17:36:02.959294 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:36:02.959413 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 12 17:36:03.232180 kubelet[2694]: I1212 17:36:03.232002 2694 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-backend-key-pair\") pod \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\" (UID: \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\") " Dec 12 17:36:03.232180 kubelet[2694]: I1212 17:36:03.232051 2694 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbhj5\" (UniqueName: \"kubernetes.io/projected/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-kube-api-access-xbhj5\") pod \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\" (UID: \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\") " Dec 12 17:36:03.232180 kubelet[2694]: I1212 17:36:03.232085 2694 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-ca-bundle\") pod \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\" (UID: \"fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0\") " Dec 12 17:36:03.242718 kubelet[2694]: I1212 17:36:03.242665 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0" (UID: "fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:36:03.243497 kubelet[2694]: I1212 17:36:03.243438 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-kube-api-access-xbhj5" (OuterVolumeSpecName: "kube-api-access-xbhj5") pod "fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0" (UID: "fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0"). InnerVolumeSpecName "kube-api-access-xbhj5".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:36:03.248238 kubelet[2694]: I1212 17:36:03.248155 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0" (UID: "fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:36:03.333136 kubelet[2694]: I1212 17:36:03.333063 2694 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.333136 kubelet[2694]: I1212 17:36:03.333101 2694 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.333136 kubelet[2694]: I1212 17:36:03.333111 2694 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xbhj5\" (UniqueName: \"kubernetes.io/projected/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0-kube-api-access-xbhj5\") on node \"localhost\" DevicePath \"\"" Dec 12 17:36:03.401374 systemd[1]: var-lib-kubelet-pods-fa44fcd1\x2d9f91\x2d4b2a\x2db2d6\x2d2f07eeafbaa0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxbhj5.mount: Deactivated successfully. Dec 12 17:36:03.401497 systemd[1]: var-lib-kubelet-pods-fa44fcd1\x2d9f91\x2d4b2a\x2db2d6\x2d2f07eeafbaa0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:36:03.578932 kubelet[2694]: E1212 17:36:03.578827 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:03.579296 systemd[1]: Removed slice kubepods-besteffort-podfa44fcd1_9f91_4b2a_b2d6_2f07eeafbaa0.slice - libcontainer container kubepods-besteffort-podfa44fcd1_9f91_4b2a_b2d6_2f07eeafbaa0.slice. Dec 12 17:36:03.622858 kubelet[2694]: I1212 17:36:03.622787 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rvbcv" podStartSLOduration=2.420094677 podStartE2EDuration="14.617697378s" podCreationTimestamp="2025-12-12 17:35:49 +0000 UTC" firstStartedPulling="2025-12-12 17:35:50.46598094 +0000 UTC m=+24.119354342" lastFinishedPulling="2025-12-12 17:36:02.663583641 +0000 UTC m=+36.316957043" observedRunningTime="2025-12-12 17:36:03.595178692 +0000 UTC m=+37.248552094" watchObservedRunningTime="2025-12-12 17:36:03.617697378 +0000 UTC m=+37.271070780" Dec 12 17:36:03.668949 systemd[1]: Created slice kubepods-besteffort-pod011eaf78_fad0_4e3a_a45a_ec2b91088ae0.slice - libcontainer container kubepods-besteffort-pod011eaf78_fad0_4e3a_a45a_ec2b91088ae0.slice. 
Dec 12 17:36:03.735623 kubelet[2694]: I1212 17:36:03.735566 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/011eaf78-fad0-4e3a-a45a-ec2b91088ae0-whisker-backend-key-pair\") pod \"whisker-f869957b4-2ss98\" (UID: \"011eaf78-fad0-4e3a-a45a-ec2b91088ae0\") " pod="calico-system/whisker-f869957b4-2ss98" Dec 12 17:36:03.735623 kubelet[2694]: I1212 17:36:03.735611 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/011eaf78-fad0-4e3a-a45a-ec2b91088ae0-whisker-ca-bundle\") pod \"whisker-f869957b4-2ss98\" (UID: \"011eaf78-fad0-4e3a-a45a-ec2b91088ae0\") " pod="calico-system/whisker-f869957b4-2ss98" Dec 12 17:36:03.735623 kubelet[2694]: I1212 17:36:03.735636 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8d6\" (UniqueName: \"kubernetes.io/projected/011eaf78-fad0-4e3a-a45a-ec2b91088ae0-kube-api-access-7n8d6\") pod \"whisker-f869957b4-2ss98\" (UID: \"011eaf78-fad0-4e3a-a45a-ec2b91088ae0\") " pod="calico-system/whisker-f869957b4-2ss98" Dec 12 17:36:03.985481 containerd[1532]: time="2025-12-12T17:36:03.985435805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f869957b4-2ss98,Uid:011eaf78-fad0-4e3a-a45a-ec2b91088ae0,Namespace:calico-system,Attempt:0,}" Dec 12 17:36:04.196841 systemd-networkd[1449]: cali7b356fd691c: Link UP Dec 12 17:36:04.199391 systemd-networkd[1449]: cali7b356fd691c: Gained carrier Dec 12 17:36:04.211307 containerd[1532]: 2025-12-12 17:36:04.018 [INFO][3859] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:36:04.211307 containerd[1532]: 2025-12-12 17:36:04.066 [INFO][3859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f869957b4--2ss98-eth0 whisker-f869957b4- calico-system 011eaf78-fad0-4e3a-a45a-ec2b91088ae0 919 0 2025-12-12 17:36:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f869957b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f869957b4-2ss98 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7b356fd691c [] [] }} ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-" Dec 12 17:36:04.211307 containerd[1532]: 2025-12-12 17:36:04.066 [INFO][3859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.211307 containerd[1532]: 2025-12-12 17:36:04.143 [INFO][3874] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" HandleID="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Workload="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.143 [INFO][3874] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" 
HandleID="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Workload="localhost-k8s-whisker--f869957b4--2ss98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000135890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f869957b4-2ss98", "timestamp":"2025-12-12 17:36:04.143214858 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.143 [INFO][3874] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.143 [INFO][3874] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.143 [INFO][3874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.156 [INFO][3874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" host="localhost" Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.163 [INFO][3874] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.168 [INFO][3874] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.170 [INFO][3874] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.173 [INFO][3874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:04.211559 containerd[1532]: 2025-12-12 17:36:04.173 [INFO][3874] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" host="localhost" Dec 12 17:36:04.211821 containerd[1532]: 2025-12-12 17:36:04.175 [INFO][3874] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2 Dec 12 17:36:04.211821 containerd[1532]: 2025-12-12 17:36:04.179 [INFO][3874] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" host="localhost" Dec 12 17:36:04.211821 containerd[1532]: 2025-12-12 17:36:04.185 [INFO][3874] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" host="localhost" Dec 12 17:36:04.211821 containerd[1532]: 2025-12-12 17:36:04.185 [INFO][3874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" host="localhost" Dec 12 17:36:04.211821 containerd[1532]: 2025-12-12 17:36:04.185 [INFO][3874] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:36:04.211821 containerd[1532]: 2025-12-12 17:36:04.185 [INFO][3874] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" HandleID="k8s-pod-network.cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Workload="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.211940 containerd[1532]: 2025-12-12 17:36:04.188 [INFO][3859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f869957b4--2ss98-eth0", GenerateName:"whisker-f869957b4-", Namespace:"calico-system", SelfLink:"", UID:"011eaf78-fad0-4e3a-a45a-ec2b91088ae0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f869957b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f869957b4-2ss98", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7b356fd691c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:04.211940 containerd[1532]: 2025-12-12 17:36:04.188 [INFO][3859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.212006 containerd[1532]: 2025-12-12 17:36:04.188 [INFO][3859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b356fd691c ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.212006 containerd[1532]: 2025-12-12 17:36:04.197 [INFO][3859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.212046 containerd[1532]: 2025-12-12 17:36:04.197 [INFO][3859] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f869957b4--2ss98-eth0", GenerateName:"whisker-f869957b4-", Namespace:"calico-system", SelfLink:"", UID:"011eaf78-fad0-4e3a-a45a-ec2b91088ae0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 36, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f869957b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2", Pod:"whisker-f869957b4-2ss98", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7b356fd691c", MAC:"da:f4:39:69:bf:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:04.212088 containerd[1532]: 2025-12-12 17:36:04.206 [INFO][3859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" Namespace="calico-system" Pod="whisker-f869957b4-2ss98" WorkloadEndpoint="localhost-k8s-whisker--f869957b4--2ss98-eth0" Dec 12 17:36:04.262592 containerd[1532]: time="2025-12-12T17:36:04.261990908Z" level=info msg="connecting to shim cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2" address="unix:///run/containerd/s/7dc707f16f75d2501811824db95b346ee359791f086a3bc7e783d5b7e0e18171" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:04.290668 systemd[1]: Started cri-containerd-cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2.scope - libcontainer container cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2. 
Dec 12 17:36:04.302523 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:04.339742 containerd[1532]: time="2025-12-12T17:36:04.339535223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f869957b4-2ss98,Uid:011eaf78-fad0-4e3a-a45a-ec2b91088ae0,Namespace:calico-system,Attempt:0,} returns sandbox id \"cff1274d3fe39af1016931d515f830ab02591ca596387f4603ef14a49c19bee2\"" Dec 12 17:36:04.343327 containerd[1532]: time="2025-12-12T17:36:04.342774847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:36:04.449715 kubelet[2694]: I1212 17:36:04.449014 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0" path="/var/lib/kubelet/pods/fa44fcd1-9f91-4b2a-b2d6-2f07eeafbaa0/volumes" Dec 12 17:36:04.519913 containerd[1532]: time="2025-12-12T17:36:04.519604033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:04.557313 containerd[1532]: time="2025-12-12T17:36:04.557237502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:36:04.557433 containerd[1532]: time="2025-12-12T17:36:04.557270222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:36:04.559692 kubelet[2694]: E1212 17:36:04.559404 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:36:04.562137 kubelet[2694]: E1212 17:36:04.562086 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:36:04.563098 kubelet[2694]: E1212 17:36:04.562917 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-f869957b4-2ss98_calico-system(011eaf78-fad0-4e3a-a45a-ec2b91088ae0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:04.564969 containerd[1532]: time="2025-12-12T17:36:04.564931757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:36:04.575672 kubelet[2694]: I1212 17:36:04.575598 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:36:04.578988 kubelet[2694]: E1212 17:36:04.578930 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:04.745126 containerd[1532]: time="2025-12-12T17:36:04.745080167Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 12 17:36:04.746036 containerd[1532]: time="2025-12-12T17:36:04.745907893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:36:04.746036 containerd[1532]: time="2025-12-12T17:36:04.745971253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:36:04.746213 kubelet[2694]: E1212 17:36:04.746139 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:36:04.746213 kubelet[2694]: E1212 17:36:04.746198 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:36:04.746503 kubelet[2694]: E1212 17:36:04.746282 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-f869957b4-2ss98_calico-system(011eaf78-fad0-4e3a-a45a-ec2b91088ae0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:04.748030 kubelet[2694]: E1212 17:36:04.746588 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f869957b4-2ss98" podUID="011eaf78-fad0-4e3a-a45a-ec2b91088ae0" Dec 12 17:36:04.767812 systemd-networkd[1449]: vxlan.calico: Link UP Dec 12 17:36:04.767819 systemd-networkd[1449]: vxlan.calico: Gained carrier Dec 12 17:36:05.578647 kubelet[2694]: E1212 17:36:05.578564 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f869957b4-2ss98" podUID="011eaf78-fad0-4e3a-a45a-ec2b91088ae0" Dec 12 17:36:06.221631 systemd-networkd[1449]: cali7b356fd691c: Gained IPv6LL Dec 12 17:36:06.606698 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Dec 12 17:36:08.672326 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:49328.service - OpenSSH per-connection server daemon (10.0.0.1:49328). Dec 12 17:36:08.735829 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 49328 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:08.737286 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:08.741292 systemd-logind[1517]: New session 8 of user core. Dec 12 17:36:08.747652 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:36:08.905413 sshd[4147]: Connection closed by 10.0.0.1 port 49328 Dec 12 17:36:08.905758 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:08.910064 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:49328.service: Deactivated successfully. Dec 12 17:36:08.911694 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:36:08.913305 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:36:08.914616 systemd-logind[1517]: Removed session 8. 
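
The whisker pod's error changes between syncs above: the first attempt fails with ErrImagePull when ghcr.io returns 404 for both whisker images, and the next sync reports ImagePullBackOff, kubelet's signal that it is deliberately waiting before retrying. The back-off roughly doubles per failure from about 10s up to a 5m cap (assumed here from upstream kubelet defaults; the loop below is a sketch of the shape, not kubelet's code). Since the v3.30.4 whisker tags do not exist in the registry, every retry will 404 again until the image reference is corrected:

package main

import (
	"errors"
	"fmt"
	"time"
)

// The registry answer never changes: the tag is absent, so every
// attempt reproduces the 404 from the log.
var errNotFound = errors.New("ghcr.io/flatcar/calico/whisker:v3.30.4: not found")

func pullImage(ref string) error { return errNotFound }

func main() {
	const ref = "ghcr.io/flatcar/calico/whisker:v3.30.4"
	delay := 10 * time.Second        // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap

	for attempt := 1; attempt <= 5; attempt++ {
		if err := pullImage(ref); err != nil {
			fmt.Printf("attempt %d: ErrImagePull: %v; next retry in %v\n", attempt, err, delay)
			time.Sleep(delay) // surfaces as ImagePullBackOff while waiting
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}
}
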
Dec 12 17:36:10.485416 containerd[1532]: time="2025-12-12T17:36:10.485292497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-dpchc,Uid:9bfd2383-be0f-4729-9e33-be9b6740c131,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:36:10.491875 containerd[1532]: time="2025-12-12T17:36:10.491141373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-p7wc8,Uid:5851147d-b983-4c29-b7c9-3b2c46491cf9,Namespace:calico-system,Attempt:0,}" Dec 12 17:36:10.624018 systemd-networkd[1449]: cali79aa2c1726e: Link UP Dec 12 17:36:10.624718 systemd-networkd[1449]: cali79aa2c1726e: Gained carrier Dec 12 17:36:10.642717 containerd[1532]: 2025-12-12 17:36:10.533 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0 calico-apiserver-679d654f46- calico-apiserver 9bfd2383-be0f-4729-9e33-be9b6740c131 844 0 2025-12-12 17:35:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:679d654f46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-679d654f46-dpchc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79aa2c1726e [] [] }} ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-" Dec 12 17:36:10.642717 containerd[1532]: 2025-12-12 17:36:10.533 [INFO][4176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.642717 containerd[1532]: 2025-12-12 17:36:10.567 [INFO][4201] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" HandleID="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Workload="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.567 [INFO][4201] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" HandleID="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Workload="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001368c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-679d654f46-dpchc", "timestamp":"2025-12-12 17:36:10.567638686 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.567 [INFO][4201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.567 [INFO][4201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.567 [INFO][4201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.584 [INFO][4201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" host="localhost" Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.592 [INFO][4201] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.601 [INFO][4201] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.604 [INFO][4201] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.606 [INFO][4201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:10.643363 containerd[1532]: 2025-12-12 17:36:10.607 [INFO][4201] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" host="localhost" Dec 12 17:36:10.643754 containerd[1532]: 2025-12-12 17:36:10.609 [INFO][4201] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d Dec 12 17:36:10.643754 containerd[1532]: 2025-12-12 17:36:10.613 [INFO][4201] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" host="localhost" Dec 12 17:36:10.643754 containerd[1532]: 2025-12-12 17:36:10.618 [INFO][4201] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" host="localhost" Dec 12 17:36:10.643754 containerd[1532]: 2025-12-12 17:36:10.619 [INFO][4201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" host="localhost" Dec 12 17:36:10.643754 containerd[1532]: 2025-12-12 17:36:10.619 [INFO][4201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:36:10.643754 containerd[1532]: 2025-12-12 17:36:10.619 [INFO][4201] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" HandleID="k8s-pod-network.5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Workload="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.643977 containerd[1532]: 2025-12-12 17:36:10.621 [INFO][4176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0", GenerateName:"calico-apiserver-679d654f46-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bfd2383-be0f-4729-9e33-be9b6740c131", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679d654f46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-679d654f46-dpchc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79aa2c1726e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:10.644059 containerd[1532]: 2025-12-12 17:36:10.621 [INFO][4176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.644059 containerd[1532]: 2025-12-12 17:36:10.621 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79aa2c1726e ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.644059 containerd[1532]: 2025-12-12 17:36:10.625 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.644152 containerd[1532]: 2025-12-12 17:36:10.626 [INFO][4176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0", GenerateName:"calico-apiserver-679d654f46-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bfd2383-be0f-4729-9e33-be9b6740c131", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679d654f46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d", Pod:"calico-apiserver-679d654f46-dpchc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79aa2c1726e", MAC:"7e:07:60:20:a4:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:10.644282 containerd[1532]: 2025-12-12 17:36:10.640 [INFO][4176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-dpchc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--dpchc-eth0" Dec 12 17:36:10.675943 containerd[1532]: time="2025-12-12T17:36:10.675886636Z" level=info msg="connecting to shim 5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d" address="unix:///run/containerd/s/4438559ff08c045d6b20887ef3fcd972442bdeddfdc65f22961d61f4e8243ba4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:10.701689 systemd[1]: Started cri-containerd-5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d.scope - libcontainer container 5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d. 
Dec 12 17:36:10.719502 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:10.736986 systemd-networkd[1449]: calicd6f7cc49af: Link UP Dec 12 17:36:10.739056 systemd-networkd[1449]: calicd6f7cc49af: Gained carrier Dec 12 17:36:10.762155 containerd[1532]: 2025-12-12 17:36:10.565 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--p7wc8-eth0 goldmane-7c778bb748- calico-system 5851147d-b983-4c29-b7c9-3b2c46491cf9 849 0 2025-12-12 17:35:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-p7wc8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicd6f7cc49af [] [] }} ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-" Dec 12 17:36:10.762155 containerd[1532]: 2025-12-12 17:36:10.566 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762155 containerd[1532]: 2025-12-12 17:36:10.605 [INFO][4212] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" HandleID="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Workload="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.606 [INFO][4212] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" HandleID="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Workload="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000523140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-p7wc8", "timestamp":"2025-12-12 17:36:10.605704402 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.606 [INFO][4212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.619 [INFO][4212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.619 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.684 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" host="localhost" Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.692 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.701 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.703 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.709 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:10.762361 containerd[1532]: 2025-12-12 17:36:10.710 [INFO][4212] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" host="localhost" Dec 12 17:36:10.762663 containerd[1532]: 2025-12-12 17:36:10.712 [INFO][4212] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359 Dec 12 17:36:10.762663 containerd[1532]: 2025-12-12 17:36:10.717 [INFO][4212] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" host="localhost" Dec 12 17:36:10.762663 containerd[1532]: 2025-12-12 17:36:10.730 [INFO][4212] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" host="localhost" Dec 12 17:36:10.762663 containerd[1532]: 2025-12-12 17:36:10.730 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" host="localhost" Dec 12 17:36:10.762663 containerd[1532]: 2025-12-12 17:36:10.730 [INFO][4212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:36:10.762663 containerd[1532]: 2025-12-12 17:36:10.730 [INFO][4212] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" HandleID="k8s-pod-network.f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Workload="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762780 containerd[1532]: 2025-12-12 17:36:10.734 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--p7wc8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5851147d-b983-4c29-b7c9-3b2c46491cf9", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-p7wc8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicd6f7cc49af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:10.762780 containerd[1532]: 2025-12-12 17:36:10.734 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762849 containerd[1532]: 2025-12-12 17:36:10.734 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd6f7cc49af ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762849 containerd[1532]: 2025-12-12 17:36:10.741 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762888 containerd[1532]: 2025-12-12 17:36:10.742 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--p7wc8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"5851147d-b983-4c29-b7c9-3b2c46491cf9", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359", Pod:"goldmane-7c778bb748-p7wc8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicd6f7cc49af", MAC:"1e:bb:dd:61:d2:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:10.762946 containerd[1532]: 2025-12-12 17:36:10.755 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" Namespace="calico-system" Pod="goldmane-7c778bb748-p7wc8" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--p7wc8-eth0" Dec 12 17:36:10.762946 containerd[1532]: time="2025-12-12T17:36:10.762199770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-dpchc,Uid:9bfd2383-be0f-4729-9e33-be9b6740c131,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d9e802eb2e1a3bbe46fe77e18eeb2b8235a9c540907e154ae6d2c3e110b6c4d\"" Dec 12 17:36:10.764660 containerd[1532]: time="2025-12-12T17:36:10.764547225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:36:10.790017 containerd[1532]: time="2025-12-12T17:36:10.789134657Z" level=info msg="connecting to shim f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359" address="unix:///run/containerd/s/e834ccf798302d75feab2b782f14805f73c3ec01a9689961080198bd7d7b3747" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:10.809670 systemd[1]: Started cri-containerd-f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359.scope - libcontainer container f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359. 
Dec 12 17:36:10.821798 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:10.847051 containerd[1532]: time="2025-12-12T17:36:10.846996735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-p7wc8,Uid:5851147d-b983-4c29-b7c9-3b2c46491cf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f123c1eeeb1d78901c85ea4416e06137f99b795ff671f882f48678ce454dd359\"" Dec 12 17:36:10.947912 containerd[1532]: time="2025-12-12T17:36:10.947852278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:10.948964 containerd[1532]: time="2025-12-12T17:36:10.948916565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:36:10.949053 containerd[1532]: time="2025-12-12T17:36:10.949036966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:36:10.949202 kubelet[2694]: E1212 17:36:10.949166 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:36:10.950348 kubelet[2694]: E1212 17:36:10.949215 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:36:10.950348 kubelet[2694]: E1212 17:36:10.949367 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-679d654f46-dpchc_calico-apiserver(9bfd2383-be0f-4729-9e33-be9b6740c131): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:10.950348 kubelet[2694]: E1212 17:36:10.949426 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" podUID="9bfd2383-be0f-4729-9e33-be9b6740c131" Dec 12 17:36:10.950673 containerd[1532]: time="2025-12-12T17:36:10.950121812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:36:11.152242 containerd[1532]: time="2025-12-12T17:36:11.152107323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:11.153661 containerd[1532]: time="2025-12-12T17:36:11.153593772Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:36:11.153736 containerd[1532]: time="2025-12-12T17:36:11.153691932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:36:11.153898 kubelet[2694]: E1212 17:36:11.153830 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:36:11.153898 kubelet[2694]: E1212 17:36:11.153883 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:36:11.153979 kubelet[2694]: E1212 17:36:11.153967 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-p7wc8_calico-system(5851147d-b983-4c29-b7c9-3b2c46491cf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:11.154030 kubelet[2694]: E1212 17:36:11.154002 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-p7wc8" podUID="5851147d-b983-4c29-b7c9-3b2c46491cf9" Dec 12 17:36:11.448498 containerd[1532]: time="2025-12-12T17:36:11.448074395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-xhzpc,Uid:808a23f5-b1ce-476f-8868-bf6cd3d8ccdd,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:36:11.553689 systemd-networkd[1449]: cali821934b603c: Link UP Dec 12 17:36:11.554600 systemd-networkd[1449]: cali821934b603c: Gained carrier Dec 12 17:36:11.570420 containerd[1532]: 2025-12-12 17:36:11.482 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0 calico-apiserver-679d654f46- calico-apiserver 808a23f5-b1ce-476f-8868-bf6cd3d8ccdd 846 0 2025-12-12 17:35:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:679d654f46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-679d654f46-xhzpc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali821934b603c [] [] }} 
ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-" Dec 12 17:36:11.570420 containerd[1532]: 2025-12-12 17:36:11.483 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.570420 containerd[1532]: 2025-12-12 17:36:11.509 [INFO][4352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" HandleID="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Workload="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.509 [INFO][4352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" HandleID="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Workload="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059aaa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-679d654f46-xhzpc", "timestamp":"2025-12-12 17:36:11.509728809 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.509 [INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.509 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.509 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.520 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" host="localhost" Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.525 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.529 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.532 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.534 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:11.570831 containerd[1532]: 2025-12-12 17:36:11.535 [INFO][4352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" host="localhost" Dec 12 17:36:11.571039 containerd[1532]: 2025-12-12 17:36:11.538 [INFO][4352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec Dec 12 17:36:11.571039 containerd[1532]: 2025-12-12 17:36:11.542 [INFO][4352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" host="localhost" Dec 12 17:36:11.571039 containerd[1532]: 2025-12-12 17:36:11.548 [INFO][4352] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" host="localhost" Dec 12 17:36:11.571039 containerd[1532]: 2025-12-12 17:36:11.548 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" host="localhost" Dec 12 17:36:11.571039 containerd[1532]: 2025-12-12 17:36:11.548 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
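The trace above is one full IPAM claim for calico-apiserver-679d654f46-xhzpc: acquire the host-wide lock, look up the host's block affinities, confirm affinity for 192.168.88.128/26, load the block, claim the first free address (192.168.88.132), write the block back, and release the lock. A greatly simplified sketch of the first-free scan, treating block state as a plain set of claimed addresses; the real ipam.go also tracks handles and attributes and does compare-and-swap writes to the datastore, and the assumption that .128 through .130 were claimed before this excerpt is not visible in the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // firstFree walks a block in address order and returns the first address
    // not already claimed -- a toy version of the assignment step logged as
    // ipam.go 1219/1262 above.
    func firstFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !claimed[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        claimed := map[netip.Addr]bool{}
        // Assume .128-.131 were handed out earlier (.131 is confirmed above;
        // the rest predate this excerpt).
        for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.88.132")) < 0; a = a.Next() {
            claimed[a] = true
        }
        next, _ := firstFree(block, claimed)
        fmt.Println(next) // 192.168.88.132, matching the claim logged above
    }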
Dec 12 17:36:11.571039 containerd[1532]: 2025-12-12 17:36:11.548 [INFO][4352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" HandleID="k8s-pod-network.d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Workload="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.571150 containerd[1532]: 2025-12-12 17:36:11.551 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0", GenerateName:"calico-apiserver-679d654f46-", Namespace:"calico-apiserver", SelfLink:"", UID:"808a23f5-b1ce-476f-8868-bf6cd3d8ccdd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679d654f46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-679d654f46-xhzpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali821934b603c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:11.571201 containerd[1532]: 2025-12-12 17:36:11.551 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.571201 containerd[1532]: 2025-12-12 17:36:11.551 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali821934b603c ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.571201 containerd[1532]: 2025-12-12 17:36:11.554 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.571256 containerd[1532]: 2025-12-12 17:36:11.556 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0", GenerateName:"calico-apiserver-679d654f46-", Namespace:"calico-apiserver", SelfLink:"", UID:"808a23f5-b1ce-476f-8868-bf6cd3d8ccdd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"679d654f46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec", Pod:"calico-apiserver-679d654f46-xhzpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali821934b603c", MAC:"f6:88:cc:7f:40:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:11.571305 containerd[1532]: 2025-12-12 17:36:11.566 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" Namespace="calico-apiserver" Pod="calico-apiserver-679d654f46-xhzpc" WorkloadEndpoint="localhost-k8s-calico--apiserver--679d654f46--xhzpc-eth0" Dec 12 17:36:11.594142 containerd[1532]: time="2025-12-12T17:36:11.594068640Z" level=info msg="connecting to shim d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec" address="unix:///run/containerd/s/e5e9931bb18f16a78490ea69ee84280b6f79b22dff72cd90201c382704173b16" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:11.610179 kubelet[2694]: E1212 17:36:11.610130 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-p7wc8" podUID="5851147d-b983-4c29-b7c9-3b2c46491cf9" Dec 12 17:36:11.612514 kubelet[2694]: E1212 17:36:11.612438 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" podUID="9bfd2383-be0f-4729-9e33-be9b6740c131" Dec 12 17:36:11.626923 systemd[1]: Started cri-containerd-d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec.scope - libcontainer container d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec. Dec 12 17:36:11.650707 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:11.673723 containerd[1532]: time="2025-12-12T17:36:11.673675682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-679d654f46-xhzpc,Uid:808a23f5-b1ce-476f-8868-bf6cd3d8ccdd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4d0829bef60c6ca203d3270a67c243c811b3d32ddfa1c4e5aebc0a497c99dec\"" Dec 12 17:36:11.675491 containerd[1532]: time="2025-12-12T17:36:11.675446413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:36:11.789687 systemd-networkd[1449]: calicd6f7cc49af: Gained IPv6LL Dec 12 17:36:11.883958 containerd[1532]: time="2025-12-12T17:36:11.883899875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:11.884936 containerd[1532]: time="2025-12-12T17:36:11.884836841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:36:11.884936 containerd[1532]: time="2025-12-12T17:36:11.884886721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:36:11.885115 kubelet[2694]: E1212 17:36:11.885057 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:36:11.885115 kubelet[2694]: E1212 17:36:11.885109 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:36:11.885209 kubelet[2694]: E1212 17:36:11.885190 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-679d654f46-xhzpc_calico-apiserver(808a23f5-b1ce-476f-8868-bf6cd3d8ccdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:11.885259 kubelet[2694]: E1212 17:36:11.885225 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" podUID="808a23f5-b1ce-476f-8868-bf6cd3d8ccdd" Dec 12 17:36:12.449044 kubelet[2694]: E1212 17:36:12.448995 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:12.450102 containerd[1532]: time="2025-12-12T17:36:12.449689809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dtnp2,Uid:f61d5068-3614-49f1-a04f-ea15b840e763,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:12.452503 containerd[1532]: time="2025-12-12T17:36:12.451602660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvz6g,Uid:32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75,Namespace:calico-system,Attempt:0,}" Dec 12 17:36:12.557762 systemd-networkd[1449]: cali79aa2c1726e: Gained IPv6LL Dec 12 17:36:12.568609 systemd-networkd[1449]: cali48fdafa9e6a: Link UP Dec 12 17:36:12.569306 systemd-networkd[1449]: cali48fdafa9e6a: Gained carrier Dec 12 17:36:12.583034 containerd[1532]: 2025-12-12 17:36:12.497 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pvz6g-eth0 csi-node-driver- calico-system 32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75 746 0 2025-12-12 17:35:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pvz6g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali48fdafa9e6a [] [] }} ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-" Dec 12 17:36:12.583034 containerd[1532]: 2025-12-12 17:36:12.497 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.583034 containerd[1532]: 2025-12-12 17:36:12.522 [INFO][4450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" HandleID="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Workload="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.522 [INFO][4450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" HandleID="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Workload="localhost-k8s-csi--node--driver--pvz6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pvz6g", "timestamp":"2025-12-12 17:36:12.522807083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.523 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.523 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.523 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.532 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" host="localhost" Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.539 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.544 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.546 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.548 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:12.583460 containerd[1532]: 2025-12-12 17:36:12.548 [INFO][4450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" host="localhost" Dec 12 17:36:12.583693 containerd[1532]: 2025-12-12 17:36:12.549 [INFO][4450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4 Dec 12 17:36:12.583693 containerd[1532]: 2025-12-12 17:36:12.553 [INFO][4450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" host="localhost" Dec 12 17:36:12.583693 containerd[1532]: 2025-12-12 17:36:12.559 [INFO][4450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" host="localhost" Dec 12 17:36:12.583693 containerd[1532]: 2025-12-12 17:36:12.559 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" host="localhost" Dec 12 17:36:12.583693 containerd[1532]: 2025-12-12 17:36:12.559 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
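Every image pull in this log fails the same way: ghcr.io answers 404 at manifest resolution for the v3.30.4 Calico tags, containerd reports NotFound, and kubelet records ErrImagePull on the first attempt and ImagePullBackOff on subsequent pod syncs, as seen for goldmane and both calico-apiserver pods above. A sketch of the back-off schedule kubelet applies between retries, assuming the commonly cited defaults of a 10s initial delay doubling to a 300s cap; these constants are an assumption and are not printed anywhere in this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet image-pull back-off: 10s initial, doubling per
        // failed sync, capped at 5m. Not shown in the log itself.
        wait, maxWait := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("pull failure %d -> back off %v\n", attempt, wait)
            wait *= 2
            if wait > maxWait {
                wait = maxWait
            }
        }
    }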
Dec 12 17:36:12.583693 containerd[1532]: 2025-12-12 17:36:12.560 [INFO][4450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" HandleID="k8s-pod-network.7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Workload="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.583814 containerd[1532]: 2025-12-12 17:36:12.563 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pvz6g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pvz6g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48fdafa9e6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:12.583867 containerd[1532]: 2025-12-12 17:36:12.564 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.583867 containerd[1532]: 2025-12-12 17:36:12.564 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48fdafa9e6a ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.583867 containerd[1532]: 2025-12-12 17:36:12.570 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.583939 containerd[1532]: 2025-12-12 17:36:12.571 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pvz6g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4", Pod:"csi-node-driver-pvz6g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48fdafa9e6a", MAC:"72:9b:ec:85:f1:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:12.583988 containerd[1532]: 2025-12-12 17:36:12.580 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" Namespace="calico-system" Pod="csi-node-driver-pvz6g" WorkloadEndpoint="localhost-k8s-csi--node--driver--pvz6g-eth0" Dec 12 17:36:12.611723 containerd[1532]: time="2025-12-12T17:36:12.611655690Z" level=info msg="connecting to shim 7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4" address="unix:///run/containerd/s/493e592ac944e16e151632c29a037e22c60853333f82b5b79234fdecc31302d2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:12.628902 kubelet[2694]: E1212 17:36:12.628654 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" podUID="808a23f5-b1ce-476f-8868-bf6cd3d8ccdd" Dec 12 17:36:12.628902 kubelet[2694]: E1212 17:36:12.628716 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-p7wc8" podUID="5851147d-b983-4c29-b7c9-3b2c46491cf9" Dec 12 
17:36:12.629482 kubelet[2694]: E1212 17:36:12.629260 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" podUID="9bfd2383-be0f-4729-9e33-be9b6740c131" Dec 12 17:36:12.632652 systemd[1]: Started cri-containerd-7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4.scope - libcontainer container 7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4. Dec 12 17:36:12.653794 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:12.682229 containerd[1532]: time="2025-12-12T17:36:12.682166829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pvz6g,Uid:32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d9c75537a97e0a638f4bfc5a0770e139ffd1e77c1e67e7a14cd7bed449b1eb4\"" Dec 12 17:36:12.685407 containerd[1532]: time="2025-12-12T17:36:12.685342528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:36:12.694186 systemd-networkd[1449]: calie22245ffd50: Link UP Dec 12 17:36:12.695236 systemd-networkd[1449]: calie22245ffd50: Gained carrier Dec 12 17:36:12.713289 containerd[1532]: 2025-12-12 17:36:12.497 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dtnp2-eth0 coredns-66bc5c9577- kube-system f61d5068-3614-49f1-a04f-ea15b840e763 845 0 2025-12-12 17:35:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dtnp2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie22245ffd50 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-" Dec 12 17:36:12.713289 containerd[1532]: 2025-12-12 17:36:12.497 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.713289 containerd[1532]: 2025-12-12 17:36:12.524 [INFO][4443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" HandleID="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Workload="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.524 [INFO][4443] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" 
HandleID="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Workload="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cfd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dtnp2", "timestamp":"2025-12-12 17:36:12.524355332 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.524 [INFO][4443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.559 [INFO][4443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.560 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.633 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" host="localhost" Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.644 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.655 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.662 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.665 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:12.714297 containerd[1532]: 2025-12-12 17:36:12.665 [INFO][4443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" host="localhost" Dec 12 17:36:12.714970 containerd[1532]: 2025-12-12 17:36:12.667 [INFO][4443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e Dec 12 17:36:12.714970 containerd[1532]: 2025-12-12 17:36:12.676 [INFO][4443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" host="localhost" Dec 12 17:36:12.714970 containerd[1532]: 2025-12-12 17:36:12.684 [INFO][4443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" host="localhost" Dec 12 17:36:12.714970 containerd[1532]: 2025-12-12 17:36:12.685 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" host="localhost" Dec 12 17:36:12.714970 containerd[1532]: 2025-12-12 17:36:12.685 [INFO][4443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
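The recurring kubelet dns.go:154 warning means the node's resolv.conf lists more nameservers than kubelet will propagate into a pod: kubelet keeps the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest, matching the classic resolver's MAXNS limit of 3. A minimal sketch of that truncation; the fourth entry is hypothetical, added only to trigger the cap:

    package main

    import "fmt"

    func main() {
        // Node resolv.conf entries. Only the first three appear in the
        // "applied nameserver line" logged above; "9.9.9.9" is a
        // hypothetical extra entry for illustration.
        nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        const maxNS = 3 // assumed cap, matching glibc MAXNS
        if len(nameservers) > maxNS {
            nameservers = nameservers[:maxNS]
        }
        fmt.Println(nameservers) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }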
Dec 12 17:36:12.714970 containerd[1532]: 2025-12-12 17:36:12.686 [INFO][4443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" HandleID="k8s-pod-network.35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Workload="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.715095 containerd[1532]: 2025-12-12 17:36:12.691 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dtnp2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f61d5068-3614-49f1-a04f-ea15b840e763", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dtnp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie22245ffd50", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:12.715095 containerd[1532]: 2025-12-12 17:36:12.691 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.715095 containerd[1532]: 2025-12-12 17:36:12.691 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie22245ffd50 ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.715095 containerd[1532]: 2025-12-12 
17:36:12.695 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.715095 containerd[1532]: 2025-12-12 17:36:12.696 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dtnp2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f61d5068-3614-49f1-a04f-ea15b840e763", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e", Pod:"coredns-66bc5c9577-dtnp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie22245ffd50", MAC:"be:6f:14:df:37:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:12.715095 containerd[1532]: 2025-12-12 17:36:12.709 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" Namespace="kube-system" Pod="coredns-66bc5c9577-dtnp2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dtnp2-eth0" Dec 12 17:36:12.748520 containerd[1532]: time="2025-12-12T17:36:12.748431902Z" level=info msg="connecting to shim 35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e" address="unix:///run/containerd/s/c7139e45a29e8ab8493a6fce8acc8ffb6199fac27435e75bdc7755c8a26fa489" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:12.749747 systemd-networkd[1449]: cali821934b603c: Gained IPv6LL Dec 12 
17:36:12.779678 systemd[1]: Started cri-containerd-35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e.scope - libcontainer container 35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e. Dec 12 17:36:12.802226 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:12.831754 containerd[1532]: time="2025-12-12T17:36:12.831231194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dtnp2,Uid:f61d5068-3614-49f1-a04f-ea15b840e763,Namespace:kube-system,Attempt:0,} returns sandbox id \"35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e\"" Dec 12 17:36:12.832081 kubelet[2694]: E1212 17:36:12.832039 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:12.841495 containerd[1532]: time="2025-12-12T17:36:12.841226053Z" level=info msg="CreateContainer within sandbox \"35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:36:12.853811 containerd[1532]: time="2025-12-12T17:36:12.853758447Z" level=info msg="Container 2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:12.871733 containerd[1532]: time="2025-12-12T17:36:12.871669674Z" level=info msg="CreateContainer within sandbox \"35669aa80f944a85e0deb06062d20005261879420c300851514b977cf1fb8c8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41\"" Dec 12 17:36:12.872673 containerd[1532]: time="2025-12-12T17:36:12.872646640Z" level=info msg="StartContainer for \"2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41\"" Dec 12 17:36:12.874340 containerd[1532]: time="2025-12-12T17:36:12.874310209Z" level=info msg="connecting to shim 2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41" address="unix:///run/containerd/s/c7139e45a29e8ab8493a6fce8acc8ffb6199fac27435e75bdc7755c8a26fa489" protocol=ttrpc version=3 Dec 12 17:36:12.890430 containerd[1532]: time="2025-12-12T17:36:12.890377945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:12.891493 containerd[1532]: time="2025-12-12T17:36:12.891373311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:36:12.891568 containerd[1532]: time="2025-12-12T17:36:12.891443911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:36:12.891704 kubelet[2694]: E1212 17:36:12.891667 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:36:12.891750 kubelet[2694]: E1212 17:36:12.891716 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:36:12.891814 kubelet[2694]: E1212 17:36:12.891794 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pvz6g_calico-system(32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:12.893162 containerd[1532]: time="2025-12-12T17:36:12.893122641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:36:12.897148 systemd[1]: Started cri-containerd-2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41.scope - libcontainer container 2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41. Dec 12 17:36:12.939672 containerd[1532]: time="2025-12-12T17:36:12.939596717Z" level=info msg="StartContainer for \"2c2b190f3c3643e934467cc33f9b9af065db025d09f8a73070ed5be2a0225e41\" returns successfully" Dec 12 17:36:13.091853 containerd[1532]: time="2025-12-12T17:36:13.091602169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:13.102715 containerd[1532]: time="2025-12-12T17:36:13.102654394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:36:13.102979 containerd[1532]: time="2025-12-12T17:36:13.102710274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:36:13.103182 kubelet[2694]: E1212 17:36:13.103126 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:36:13.103252 kubelet[2694]: E1212 17:36:13.103189 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:36:13.103275 kubelet[2694]: E1212 17:36:13.103260 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pvz6g_calico-system(32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:13.103330 kubelet[2694]: E1212 17:36:13.103301 
2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75" Dec 12 17:36:13.454490 containerd[1532]: time="2025-12-12T17:36:13.454438242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576cd48c6c-7tthz,Uid:1d63dbd1-5b8b-4814-ac94-b8332212f2ae,Namespace:calico-system,Attempt:0,}" Dec 12 17:36:13.455363 kubelet[2694]: E1212 17:36:13.455332 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:13.455711 containerd[1532]: time="2025-12-12T17:36:13.455664810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8vlzj,Uid:0cf243bf-b4d2-42a4-b6f2-a03640dafbdf,Namespace:kube-system,Attempt:0,}" Dec 12 17:36:13.578418 systemd-networkd[1449]: cali3567654ca31: Link UP Dec 12 17:36:13.579267 systemd-networkd[1449]: cali3567654ca31: Gained carrier Dec 12 17:36:13.608733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099665875.mount: Deactivated successfully. 
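In the coredns endpoint dumps above, the WorkloadEndpointPort list prints Port values in hex (0x35, 0x23c1, 0x1f90, 0x1ff5); the decimal equivalents already appear in the plugin.go 340 line (dns 53, metrics 9153, liveness-probe 8080, readiness-probe 8181). A one-off check of the conversions:

    package main

    import "fmt"

    func main() {
        // Hex ports from the coredns WorkloadEndpoint dump, decoded.
        for name, p := range map[string]int{
            "dns/dns-tcp":     0x35,
            "metrics":         0x23c1,
            "liveness-probe":  0x1f90,
            "readiness-probe": 0x1ff5,
        } {
            fmt.Printf("%-16s -> %d\n", name, p) // 53, 9153, 8080, 8181
        }
    }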
Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.498 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--8vlzj-eth0 coredns-66bc5c9577- kube-system 0cf243bf-b4d2-42a4-b6f2-a03640dafbdf 843 0 2025-12-12 17:35:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-8vlzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3567654ca31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.498 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.529 [INFO][4640] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" HandleID="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Workload="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.529 [INFO][4640] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" HandleID="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Workload="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-8vlzj", "timestamp":"2025-12-12 17:36:13.529064917 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.529 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.529 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.529 [INFO][4640] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.543 [INFO][4640] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.549 [INFO][4640] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.555 [INFO][4640] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.557 [INFO][4640] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.560 [INFO][4640] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.560 [INFO][4640] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.562 [INFO][4640] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97 Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.566 [INFO][4640] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.572 [INFO][4640] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.573 [INFO][4640] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" host="localhost" Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.573 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:36:13.614598 containerd[1532]: 2025-12-12 17:36:13.573 [INFO][4640] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" HandleID="k8s-pod-network.757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Workload="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.615424 containerd[1532]: 2025-12-12 17:36:13.575 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8vlzj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0cf243bf-b4d2-42a4-b6f2-a03640dafbdf", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-8vlzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3567654ca31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:13.615424 containerd[1532]: 2025-12-12 17:36:13.575 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.615424 containerd[1532]: 2025-12-12 17:36:13.575 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3567654ca31 ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.615424 containerd[1532]: 2025-12-12 
17:36:13.580 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.615424 containerd[1532]: 2025-12-12 17:36:13.581 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8vlzj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0cf243bf-b4d2-42a4-b6f2-a03640dafbdf", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97", Pod:"coredns-66bc5c9577-8vlzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3567654ca31", MAC:"f6:61:1e:0f:1b:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:13.615424 containerd[1532]: 2025-12-12 17:36:13.611 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" Namespace="kube-system" Pod="coredns-66bc5c9577-8vlzj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8vlzj-eth0" Dec 12 17:36:13.629555 kubelet[2694]: E1212 17:36:13.629517 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:13.632817 kubelet[2694]: E1212 17:36:13.632774 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" podUID="808a23f5-b1ce-476f-8868-bf6cd3d8ccdd" Dec 12 17:36:13.633286 kubelet[2694]: E1212 17:36:13.633251 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75" Dec 12 17:36:13.668124 kubelet[2694]: I1212 17:36:13.667976 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dtnp2" podStartSLOduration=40.667953406 podStartE2EDuration="40.667953406s" podCreationTimestamp="2025-12-12 17:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:13.647221245 +0000 UTC m=+47.300594647" watchObservedRunningTime="2025-12-12 17:36:13.667953406 +0000 UTC m=+47.321326808" Dec 12 17:36:13.671274 containerd[1532]: time="2025-12-12T17:36:13.671213825Z" level=info msg="connecting to shim 757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97" address="unix:///run/containerd/s/c0e21ea2b33c54697add20f30509faf501e83dfce240bebcc4761fe866d17b95" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:13.711271 systemd-networkd[1449]: cali184138e5336: Link UP Dec 12 17:36:13.711435 systemd-networkd[1449]: cali184138e5336: Gained carrier Dec 12 17:36:13.713167 systemd[1]: Started cri-containerd-757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97.scope - libcontainer container 757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97. 
Dec 12 17:36:13.740655 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.507 [INFO][4610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0 calico-kube-controllers-576cd48c6c- calico-system 1d63dbd1-5b8b-4814-ac94-b8332212f2ae 851 0 2025-12-12 17:35:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:576cd48c6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-576cd48c6c-7tthz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali184138e5336 [] [] }} ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.507 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.540 [INFO][4646] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" HandleID="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Workload="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.540 [INFO][4646] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" HandleID="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Workload="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400018ed20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-576cd48c6c-7tthz", "timestamp":"2025-12-12 17:36:13.540558544 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.540 [INFO][4646] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.573 [INFO][4646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.573 [INFO][4646] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.643 [INFO][4646] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.653 [INFO][4646] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.667 [INFO][4646] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.674 [INFO][4646] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.679 [INFO][4646] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.679 [INFO][4646] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.682 [INFO][4646] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503 Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.690 [INFO][4646] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.701 [INFO][4646] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.701 [INFO][4646] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" host="localhost" Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.701 [INFO][4646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:36:13.742543 containerd[1532]: 2025-12-12 17:36:13.701 [INFO][4646] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" HandleID="k8s-pod-network.a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Workload="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.743603 containerd[1532]: 2025-12-12 17:36:13.706 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0", GenerateName:"calico-kube-controllers-576cd48c6c-", Namespace:"calico-system", SelfLink:"", UID:"1d63dbd1-5b8b-4814-ac94-b8332212f2ae", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576cd48c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-576cd48c6c-7tthz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184138e5336", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:13.743603 containerd[1532]: 2025-12-12 17:36:13.706 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.743603 containerd[1532]: 2025-12-12 17:36:13.706 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali184138e5336 ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.743603 containerd[1532]: 2025-12-12 17:36:13.709 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.743603 containerd[1532]: 2025-12-12 17:36:13.709 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0", GenerateName:"calico-kube-controllers-576cd48c6c-", Namespace:"calico-system", SelfLink:"", UID:"1d63dbd1-5b8b-4814-ac94-b8332212f2ae", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 35, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"576cd48c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503", Pod:"calico-kube-controllers-576cd48c6c-7tthz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184138e5336", MAC:"ae:3f:cd:89:c8:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:36:13.743603 containerd[1532]: 2025-12-12 17:36:13.732 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" Namespace="calico-system" Pod="calico-kube-controllers-576cd48c6c-7tthz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--576cd48c6c--7tthz-eth0" Dec 12 17:36:13.884424 containerd[1532]: time="2025-12-12T17:36:13.884322546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8vlzj,Uid:0cf243bf-b4d2-42a4-b6f2-a03640dafbdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97\"" Dec 12 17:36:13.885327 kubelet[2694]: E1212 17:36:13.885273 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:13.918447 containerd[1532]: time="2025-12-12T17:36:13.918394624Z" level=info msg="CreateContainer within sandbox \"757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:36:13.921095 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:48628.service - OpenSSH per-connection server daemon (10.0.0.1:48628). 
Dec 12 17:36:13.944001 containerd[1532]: time="2025-12-12T17:36:13.943882093Z" level=info msg="Container 3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:36:13.944681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893119834.mount: Deactivated successfully. Dec 12 17:36:13.955381 containerd[1532]: time="2025-12-12T17:36:13.955285479Z" level=info msg="connecting to shim a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503" address="unix:///run/containerd/s/29f06422d7f45db0c23d56f3391a54a6596fa6ca83b5693b9fedeb5eda0d8c83" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:36:13.966641 systemd-networkd[1449]: calie22245ffd50: Gained IPv6LL Dec 12 17:36:13.986702 systemd[1]: Started cri-containerd-a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503.scope - libcontainer container a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503. Dec 12 17:36:14.001287 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:36:14.062288 containerd[1532]: time="2025-12-12T17:36:14.062176695Z" level=info msg="CreateContainer within sandbox \"757f0916a76ce0507dce6c33d87b9578a8a750f70ae2be5fa79fc8abb7e26e97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3\"" Dec 12 17:36:14.062978 containerd[1532]: time="2025-12-12T17:36:14.062909379Z" level=info msg="StartContainer for \"3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3\"" Dec 12 17:36:14.064116 containerd[1532]: time="2025-12-12T17:36:14.064084386Z" level=info msg="connecting to shim 3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3" address="unix:///run/containerd/s/c0e21ea2b33c54697add20f30509faf501e83dfce240bebcc4761fe866d17b95" protocol=ttrpc version=3 Dec 12 17:36:14.066671 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 48628 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:14.069587 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:14.070719 containerd[1532]: time="2025-12-12T17:36:14.070689144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-576cd48c6c-7tthz,Uid:1d63dbd1-5b8b-4814-ac94-b8332212f2ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"a71605f475c3177d85ad9e9739a117cc0db25ad89b1acadb4b33dbdab6687503\"" Dec 12 17:36:14.074582 systemd-logind[1517]: New session 9 of user core. Dec 12 17:36:14.078543 containerd[1532]: time="2025-12-12T17:36:14.078405388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:36:14.082250 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:36:14.086855 systemd[1]: Started cri-containerd-3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3.scope - libcontainer container 3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3. 
Dec 12 17:36:14.135437 containerd[1532]: time="2025-12-12T17:36:14.135389234Z" level=info msg="StartContainer for \"3ef009deb85e6a62162621e01e8049793b342bfd74a08c525b88803b2a9928d3\" returns successfully" Dec 12 17:36:14.221605 systemd-networkd[1449]: cali48fdafa9e6a: Gained IPv6LL Dec 12 17:36:14.308607 containerd[1532]: time="2025-12-12T17:36:14.308562944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:14.341380 sshd[4786]: Connection closed by 10.0.0.1 port 48628 Dec 12 17:36:14.343173 sshd-session[4726]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:14.347411 containerd[1532]: time="2025-12-12T17:36:14.345958598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:36:14.347411 containerd[1532]: time="2025-12-12T17:36:14.346019358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:36:14.347623 kubelet[2694]: E1212 17:36:14.347085 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:36:14.347623 kubelet[2694]: E1212 17:36:14.347145 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:36:14.347623 kubelet[2694]: E1212 17:36:14.347256 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-576cd48c6c-7tthz_calico-system(1d63dbd1-5b8b-4814-ac94-b8332212f2ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:14.347623 kubelet[2694]: E1212 17:36:14.347314 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" podUID="1d63dbd1-5b8b-4814-ac94-b8332212f2ae" Dec 12 17:36:14.350547 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:36:14.350923 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:48628.service: Deactivated successfully. Dec 12 17:36:14.353515 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 12 17:36:14.355831 systemd-logind[1517]: Removed session 9. Dec 12 17:36:14.637371 kubelet[2694]: E1212 17:36:14.637134 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:14.642995 kubelet[2694]: E1212 17:36:14.642848 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:14.643462 kubelet[2694]: E1212 17:36:14.643409 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" podUID="1d63dbd1-5b8b-4814-ac94-b8332212f2ae" Dec 12 17:36:14.644691 kubelet[2694]: E1212 17:36:14.644620 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75" Dec 12 17:36:14.656861 kubelet[2694]: I1212 17:36:14.656133 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8vlzj" podStartSLOduration=41.656113211 podStartE2EDuration="41.656113211s" podCreationTimestamp="2025-12-12 17:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:36:14.654733963 +0000 UTC m=+48.308107365" watchObservedRunningTime="2025-12-12 17:36:14.656113211 +0000 UTC m=+48.309486573" Dec 12 17:36:15.309831 systemd-networkd[1449]: cali3567654ca31: Gained IPv6LL Dec 12 17:36:15.565955 systemd-networkd[1449]: cali184138e5336: Gained IPv6LL Dec 12 17:36:15.644926 kubelet[2694]: E1212 17:36:15.644842 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:15.646882 kubelet[2694]: E1212 17:36:15.645881 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:15.647982 kubelet[2694]: E1212 17:36:15.647702 2694 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" podUID="1d63dbd1-5b8b-4814-ac94-b8332212f2ae" Dec 12 17:36:16.221519 kubelet[2694]: I1212 17:36:16.221456 2694 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:36:16.221933 kubelet[2694]: E1212 17:36:16.221912 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:16.646804 kubelet[2694]: E1212 17:36:16.646702 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:16.647343 kubelet[2694]: E1212 17:36:16.646828 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:36:19.359181 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:48634.service - OpenSSH per-connection server daemon (10.0.0.1:48634). Dec 12 17:36:19.426178 sshd[4891]: Accepted publickey for core from 10.0.0.1 port 48634 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:19.427693 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:19.431668 systemd-logind[1517]: New session 10 of user core. Dec 12 17:36:19.441690 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:36:19.581445 sshd[4894]: Connection closed by 10.0.0.1 port 48634 Dec 12 17:36:19.581939 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:19.591554 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:48634.service: Deactivated successfully. Dec 12 17:36:19.595314 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:36:19.596929 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:36:19.601128 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:48646.service - OpenSSH per-connection server daemon (10.0.0.1:48646). Dec 12 17:36:19.602629 systemd-logind[1517]: Removed session 10. Dec 12 17:36:19.661804 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 48646 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:19.664293 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:19.668899 systemd-logind[1517]: New session 11 of user core. Dec 12 17:36:19.679866 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:36:19.873501 sshd[4912]: Connection closed by 10.0.0.1 port 48646 Dec 12 17:36:19.873609 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:19.885220 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:48646.service: Deactivated successfully. Dec 12 17:36:19.890109 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:36:19.891671 systemd-logind[1517]: Session 11 logged out. 
Waiting for processes to exit. Dec 12 17:36:19.896464 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:48656.service - OpenSSH per-connection server daemon (10.0.0.1:48656). Dec 12 17:36:19.897116 systemd-logind[1517]: Removed session 11. Dec 12 17:36:19.957504 sshd[4924]: Accepted publickey for core from 10.0.0.1 port 48656 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:19.959415 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:19.965402 systemd-logind[1517]: New session 12 of user core. Dec 12 17:36:19.977703 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:36:20.120889 sshd[4927]: Connection closed by 10.0.0.1 port 48656 Dec 12 17:36:20.121246 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:20.125040 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:48656.service: Deactivated successfully. Dec 12 17:36:20.126741 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:36:20.127390 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:36:20.128774 systemd-logind[1517]: Removed session 12. Dec 12 17:36:20.447876 containerd[1532]: time="2025-12-12T17:36:20.447721886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:36:20.639904 containerd[1532]: time="2025-12-12T17:36:20.639849407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:20.640894 containerd[1532]: time="2025-12-12T17:36:20.640863052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:36:20.640960 containerd[1532]: time="2025-12-12T17:36:20.640943532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:36:20.641414 kubelet[2694]: E1212 17:36:20.641116 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:36:20.641414 kubelet[2694]: E1212 17:36:20.641182 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:36:20.641414 kubelet[2694]: E1212 17:36:20.641257 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-f869957b4-2ss98_calico-system(011eaf78-fad0-4e3a-a45a-ec2b91088ae0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:20.642262 containerd[1532]: time="2025-12-12T17:36:20.642236859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:36:20.820776 
containerd[1532]: time="2025-12-12T17:36:20.816324686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:20.820776 containerd[1532]: time="2025-12-12T17:36:20.817309411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:36:20.820776 containerd[1532]: time="2025-12-12T17:36:20.817406451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:36:20.820968 kubelet[2694]: E1212 17:36:20.817590 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:36:20.820968 kubelet[2694]: E1212 17:36:20.817654 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:36:20.820968 kubelet[2694]: E1212 17:36:20.817764 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-f869957b4-2ss98_calico-system(011eaf78-fad0-4e3a-a45a-ec2b91088ae0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:20.821059 kubelet[2694]: E1212 17:36:20.817823 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f869957b4-2ss98" podUID="011eaf78-fad0-4e3a-a45a-ec2b91088ae0" Dec 12 17:36:25.136719 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:34248.service - OpenSSH per-connection server daemon (10.0.0.1:34248). Dec 12 17:36:25.198989 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:25.200184 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:25.204739 systemd-logind[1517]: New session 13 of user core. 
Dec 12 17:36:25.210651 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:36:25.342519 sshd[4953]: Connection closed by 10.0.0.1 port 34248 Dec 12 17:36:25.342883 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:25.354708 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:34248.service: Deactivated successfully. Dec 12 17:36:25.356399 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:36:25.357053 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:36:25.359462 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:34260.service - OpenSSH per-connection server daemon (10.0.0.1:34260). Dec 12 17:36:25.360302 systemd-logind[1517]: Removed session 13. Dec 12 17:36:25.428035 sshd[4967]: Accepted publickey for core from 10.0.0.1 port 34260 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:25.429714 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:25.436528 systemd-logind[1517]: New session 14 of user core. Dec 12 17:36:25.445644 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:36:25.655700 sshd[4970]: Connection closed by 10.0.0.1 port 34260 Dec 12 17:36:25.656212 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:25.668818 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:34260.service: Deactivated successfully. Dec 12 17:36:25.670772 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:36:25.671674 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:36:25.674535 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:34264.service - OpenSSH per-connection server daemon (10.0.0.1:34264). Dec 12 17:36:25.675267 systemd-logind[1517]: Removed session 14. Dec 12 17:36:25.738281 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:25.739557 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:25.743735 systemd-logind[1517]: New session 15 of user core. Dec 12 17:36:25.753661 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:36:26.287546 sshd[4985]: Connection closed by 10.0.0.1 port 34264 Dec 12 17:36:26.288151 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:26.296055 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:34264.service: Deactivated successfully. Dec 12 17:36:26.298984 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:36:26.300042 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:36:26.305956 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:34270.service - OpenSSH per-connection server daemon (10.0.0.1:34270). Dec 12 17:36:26.309180 systemd-logind[1517]: Removed session 15. Dec 12 17:36:26.368704 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 34270 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:36:26.370248 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:36:26.375404 systemd-logind[1517]: New session 16 of user core. Dec 12 17:36:26.382698 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:36:26.445510 containerd[1532]: time="2025-12-12T17:36:26.445459552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:36:26.647730 containerd[1532]: time="2025-12-12T17:36:26.647426214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:36:26.648774 containerd[1532]: time="2025-12-12T17:36:26.648731460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:36:26.649210 containerd[1532]: time="2025-12-12T17:36:26.648805581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:36:26.649260 kubelet[2694]: E1212 17:36:26.648934 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:36:26.649260 kubelet[2694]: E1212 17:36:26.648976 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:36:26.649260 kubelet[2694]: E1212 17:36:26.649172 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-679d654f46-dpchc_calico-apiserver(9bfd2383-be0f-4729-9e33-be9b6740c131): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:36:26.649260 kubelet[2694]: E1212 17:36:26.649220 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" podUID="9bfd2383-be0f-4729-9e33-be9b6740c131" Dec 12 17:36:26.649877 containerd[1532]: time="2025-12-12T17:36:26.649304503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:36:26.671612 sshd[5006]: Connection closed by 10.0.0.1 port 34270 Dec 12 17:36:26.672345 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Dec 12 17:36:26.682994 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:34270.service: Deactivated successfully. Dec 12 17:36:26.685746 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:36:26.689553 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:36:26.691435 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:34282.service - OpenSSH per-connection server daemon (10.0.0.1:34282). 
Dec 12 17:36:26.693891 systemd-logind[1517]: Removed session 16.
Dec 12 17:36:26.761215 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 34282 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:36:26.762557 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:36:26.767876 systemd-logind[1517]: New session 17 of user core.
Dec 12 17:36:26.779678 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 17:36:26.852398 containerd[1532]: time="2025-12-12T17:36:26.852345450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:36:26.854918 containerd[1532]: time="2025-12-12T17:36:26.854837542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 17:36:26.855000 containerd[1532]: time="2025-12-12T17:36:26.854860662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:36:26.855228 kubelet[2694]: E1212 17:36:26.855185 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:36:26.855296 kubelet[2694]: E1212 17:36:26.855239 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:36:26.855479 kubelet[2694]: E1212 17:36:26.855317 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-576cd48c6c-7tthz_calico-system(1d63dbd1-5b8b-4814-ac94-b8332212f2ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:36:26.855479 kubelet[2694]: E1212 17:36:26.855354 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-576cd48c6c-7tthz" podUID="1d63dbd1-5b8b-4814-ac94-b8332212f2ae"
Dec 12 17:36:26.899736 sshd[5022]: Connection closed by 10.0.0.1 port 34282
Dec 12 17:36:26.900013 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
Dec 12 17:36:26.903825 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:34282.service: Deactivated successfully.
Dec 12 17:36:26.906780 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 17:36:26.907675 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
Dec 12 17:36:26.909249 systemd-logind[1517]: Removed session 17.
Dec 12 17:36:27.445718 containerd[1532]: time="2025-12-12T17:36:27.445683713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:36:27.641116 containerd[1532]: time="2025-12-12T17:36:27.641061054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:36:27.642119 containerd[1532]: time="2025-12-12T17:36:27.642070219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:36:27.642179 containerd[1532]: time="2025-12-12T17:36:27.642136379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:36:27.642320 kubelet[2694]: E1212 17:36:27.642282 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:36:27.642383 kubelet[2694]: E1212 17:36:27.642333 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:36:27.642651 kubelet[2694]: E1212 17:36:27.642549 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-p7wc8_calico-system(5851147d-b983-4c29-b7c9-3b2c46491cf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:36:27.642651 kubelet[2694]: E1212 17:36:27.642599 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-p7wc8" podUID="5851147d-b983-4c29-b7c9-3b2c46491cf9"
Dec 12 17:36:27.643042 containerd[1532]: time="2025-12-12T17:36:27.643019024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:36:27.856697 containerd[1532]: time="2025-12-12T17:36:27.856634372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:36:27.871369 containerd[1532]: time="2025-12-12T17:36:27.871304243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:36:27.871626 kubelet[2694]: E1212 17:36:27.871572 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:36:27.871626 kubelet[2694]: E1212 17:36:27.871621 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:36:27.872004 kubelet[2694]: E1212 17:36:27.871698 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-679d654f46-xhzpc_calico-apiserver(808a23f5-b1ce-476f-8868-bf6cd3d8ccdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:36:27.872004 kubelet[2694]: E1212 17:36:27.871729 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" podUID="808a23f5-b1ce-476f-8868-bf6cd3d8ccdd"
Dec 12 17:36:27.872083 containerd[1532]: time="2025-12-12T17:36:27.871400163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:36:28.445318 containerd[1532]: time="2025-12-12T17:36:28.445221547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 17:36:28.642182 containerd[1532]: time="2025-12-12T17:36:28.642007286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:36:28.644114 containerd[1532]: time="2025-12-12T17:36:28.644023496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 17:36:28.644114 containerd[1532]: time="2025-12-12T17:36:28.644101296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 17:36:28.644320 kubelet[2694]: E1212 17:36:28.644283 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:36:28.644371 kubelet[2694]: E1212 17:36:28.644330 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:36:28.644427 kubelet[2694]: E1212 17:36:28.644408 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-pvz6g_calico-system(32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:36:28.646809 containerd[1532]: time="2025-12-12T17:36:28.646724308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 17:36:28.844258 containerd[1532]: time="2025-12-12T17:36:28.844201651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:36:28.845316 containerd[1532]: time="2025-12-12T17:36:28.845280816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 17:36:28.845383 containerd[1532]: time="2025-12-12T17:36:28.845367856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 17:36:28.845569 kubelet[2694]: E1212 17:36:28.845532 2694 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:36:28.845614 kubelet[2694]: E1212 17:36:28.845581 2694 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:36:28.845672 kubelet[2694]: E1212 17:36:28.845654 2694 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-pvz6g_calico-system(32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:36:28.845747 kubelet[2694]: E1212 17:36:28.845696 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pvz6g" podUID="32dc6de8-fc2a-4f91-8ad6-a6e0a252ed75"
Dec 12 17:36:31.918788 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:44078.service - OpenSSH per-connection server daemon (10.0.0.1:44078).
Dec 12 17:36:32.000347 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 44078 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:36:32.001875 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:36:32.009130 systemd-logind[1517]: New session 18 of user core.
Dec 12 17:36:32.013669 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 17:36:32.162335 sshd[5043]: Connection closed by 10.0.0.1 port 44078
Dec 12 17:36:32.162171 sshd-session[5040]: pam_unix(sshd:session): session closed for user core
Dec 12 17:36:32.166321 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:44078.service: Deactivated successfully.
Dec 12 17:36:32.170654 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 17:36:32.172277 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
Dec 12 17:36:32.174386 systemd-logind[1517]: Removed session 18.
Dec 12 17:36:34.446795 kubelet[2694]: E1212 17:36:34.446735 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f869957b4-2ss98" podUID="011eaf78-fad0-4e3a-a45a-ec2b91088ae0"
Dec 12 17:36:37.185243 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:44086.service - OpenSSH per-connection server daemon (10.0.0.1:44086).
Dec 12 17:36:37.257965 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 44086 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0
Dec 12 17:36:37.259374 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:36:37.263691 systemd-logind[1517]: New session 19 of user core.
Dec 12 17:36:37.273712 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 17:36:37.406794 sshd[5065]: Connection closed by 10.0.0.1 port 44086
Dec 12 17:36:37.407148 sshd-session[5062]: pam_unix(sshd:session): session closed for user core
Dec 12 17:36:37.410762 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:44086.service: Deactivated successfully.
Dec 12 17:36:37.413099 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 17:36:37.414231 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
Dec 12 17:36:37.415658 systemd-logind[1517]: Removed session 19.
Dec 12 17:36:37.444584 kubelet[2694]: E1212 17:36:37.444212 2694 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 12 17:36:37.446897 kubelet[2694]: E1212 17:36:37.446843 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-dpchc" podUID="9bfd2383-be0f-4729-9e33-be9b6740c131"
Dec 12 17:36:38.445134 kubelet[2694]: E1212 17:36:38.445071 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-679d654f46-xhzpc" podUID="808a23f5-b1ce-476f-8868-bf6cd3d8ccdd"
Dec 12 17:36:38.448527 kubelet[2694]: E1212 17:36:38.448264 2694 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-p7wc8" podUID="5851147d-b983-4c29-b7c9-3b2c46491cf9"