Jan 20 01:17:50.044653 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 20 01:17:50.044672 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Jan 19 22:57:34 -00 2026
Jan 20 01:17:50.044678 kernel: KASLR enabled
Jan 20 01:17:50.044683 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 20 01:17:50.044686 kernel: printk: legacy bootconsole [pl11] enabled
Jan 20 01:17:50.044691 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:17:50.044696 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 20 01:17:50.044701 kernel: random: crng init done
Jan 20 01:17:50.044705 kernel: secureboot: Secure boot disabled
Jan 20 01:17:50.044708 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:17:50.044712 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 20 01:17:50.044716 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044720 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044725 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 20 01:17:50.044731 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044735 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044739 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044744 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044748 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044753 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044757 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 20 01:17:50.044761 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:17:50.044766 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 20 01:17:50.044770 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 20 01:17:50.044774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 20 01:17:50.044778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 20 01:17:50.044782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 20 01:17:50.044787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 20 01:17:50.044791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 20 01:17:50.044795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 20 01:17:50.044800 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 20 01:17:50.044804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 20 01:17:50.044808 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 20 01:17:50.044813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 20 01:17:50.044817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 20 01:17:50.044821 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 20 01:17:50.044825 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 20 01:17:50.044829 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 20 01:17:50.044833 kernel: Zone ranges:
Jan 20 01:17:50.044838 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 20 01:17:50.044844 kernel: DMA32 empty
Jan 20 01:17:50.044849 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:17:50.044853 kernel: Device empty
Jan 20 01:17:50.044858 kernel: Movable zone start for each node
Jan 20 01:17:50.044862 kernel: Early memory node ranges
Jan 20 01:17:50.044866 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 20 01:17:50.044872 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 20 01:17:50.044876 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 20 01:17:50.044880 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 20 01:17:50.044885 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 20 01:17:50.044889 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 20 01:17:50.044893 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:17:50.044898 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 20 01:17:50.044902 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 20 01:17:50.044906 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 20 01:17:50.044911 kernel: psci: probing for conduit method from ACPI.
Jan 20 01:17:50.044915 kernel: psci: PSCIv1.3 detected in firmware.
Jan 20 01:17:50.044919 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 20 01:17:50.044924 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 20 01:17:50.044929 kernel: psci: SMC Calling Convention v1.4
Jan 20 01:17:50.044933 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 20 01:17:50.044938 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 20 01:17:50.044942 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 20 01:17:50.044946 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 20 01:17:50.044951 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 20 01:17:50.044955 kernel: Detected PIPT I-cache on CPU0
Jan 20 01:17:50.044959 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 20 01:17:50.044964 kernel: CPU features: detected: GIC system register CPU interface
Jan 20 01:17:50.044968 kernel: CPU features: detected: Spectre-v4
Jan 20 01:17:50.044973 kernel: CPU features: detected: Spectre-BHB
Jan 20 01:17:50.044978 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 20 01:17:50.044982 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 20 01:17:50.044986 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 20 01:17:50.044991 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 20 01:17:50.044995 kernel: alternatives: applying boot alternatives
Jan 20 01:17:50.045000 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3825f93c5ac04d887cdff1d17f655741a9a0c1b2ce2432debff700fb0368bb09
Jan 20 01:17:50.045005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:17:50.045009 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:17:50.045014 kernel: Fallback order for Node 0: 0
Jan 20 01:17:50.045018 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 20 01:17:50.045023 kernel: Policy zone: Normal
Jan 20 01:17:50.045028 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:17:50.045032 kernel: software IO TLB: area num 2.
Jan 20 01:17:50.045036 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 20 01:17:50.045041 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 20 01:17:50.045045 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:17:50.045050 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:17:50.045055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 20 01:17:50.045059 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:17:50.045063 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:17:50.045068 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:17:50.045072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 20 01:17:50.045078 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:17:50.045082 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:17:50.045087 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 20 01:17:50.045091 kernel: GICv3: 960 SPIs implemented
Jan 20 01:17:50.045095 kernel: GICv3: 0 Extended SPIs implemented
Jan 20 01:17:50.045099 kernel: Root IRQ handler: gic_handle_irq
Jan 20 01:17:50.045104 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 20 01:17:50.045108 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 20 01:17:50.045113 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 20 01:17:50.045117 kernel: ITS: No ITS available, not enabling LPIs
Jan 20 01:17:50.045121 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:17:50.045127 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 20 01:17:50.045131 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:17:50.045136 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 20 01:17:50.045140 kernel: Console: colour dummy device 80x25
Jan 20 01:17:50.045145 kernel: printk: legacy console [tty1] enabled
Jan 20 01:17:50.045150 kernel: ACPI: Core revision 20240827
Jan 20 01:17:50.045154 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 20 01:17:50.045159 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:17:50.045163 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:17:50.045168 kernel: landlock: Up and running.
Jan 20 01:17:50.045173 kernel: SELinux: Initializing.
Jan 20 01:17:50.045178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:17:50.045182 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:17:50.045187 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 20 01:17:50.045191 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 20 01:17:50.045199 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 20 01:17:50.045205 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:17:50.045209 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:17:50.045214 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:17:50.045219 kernel: Remapping and enabling EFI services.
Jan 20 01:17:50.045224 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:17:50.045228 kernel: Detected PIPT I-cache on CPU1
Jan 20 01:17:50.045234 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 20 01:17:50.045239 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 20 01:17:50.045243 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 01:17:50.045248 kernel: SMP: Total of 2 processors activated.
Jan 20 01:17:50.045253 kernel: CPU: All CPU(s) started at EL1
Jan 20 01:17:50.045258 kernel: CPU features: detected: 32-bit EL0 Support
Jan 20 01:17:50.045263 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 20 01:17:50.045268 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 20 01:17:50.045273 kernel: CPU features: detected: Common not Private translations
Jan 20 01:17:50.045277 kernel: CPU features: detected: CRC32 instructions
Jan 20 01:17:50.045282 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 20 01:17:50.045287 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 20 01:17:50.045292 kernel: CPU features: detected: LSE atomic instructions
Jan 20 01:17:50.045312 kernel: CPU features: detected: Privileged Access Never
Jan 20 01:17:50.045318 kernel: CPU features: detected: Speculation barrier (SB)
Jan 20 01:17:50.045323 kernel: CPU features: detected: TLB range maintenance instructions
Jan 20 01:17:50.045328 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 20 01:17:50.045332 kernel: CPU features: detected: Scalable Vector Extension
Jan 20 01:17:50.045337 kernel: alternatives: applying system-wide alternatives
Jan 20 01:17:50.045342 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 20 01:17:50.045347 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 20 01:17:50.045351 kernel: SVE: default vector length 16 bytes per vector
Jan 20 01:17:50.045357 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 20 01:17:50.045362 kernel: devtmpfs: initialized
Jan 20 01:17:50.045367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:17:50.045372 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 20 01:17:50.045376 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 20 01:17:50.045381 kernel: 0 pages in range for non-PLT usage
Jan 20 01:17:50.045386 kernel: 508400 pages in range for PLT usage
Jan 20 01:17:50.045390 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:17:50.045395 kernel: SMBIOS 3.1.0 present.
Jan 20 01:17:50.045400 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 20 01:17:50.045406 kernel: DMI: Memory slots populated: 2/2
Jan 20 01:17:50.045410 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:17:50.045415 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 20 01:17:50.045420 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 01:17:50.045425 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 01:17:50.045429 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:17:50.045434 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 20 01:17:50.045439 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:17:50.045444 kernel: cpuidle: using governor menu
Jan 20 01:17:50.045449 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 20 01:17:50.045454 kernel: ASID allocator initialised with 32768 entries
Jan 20 01:17:50.045459 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:17:50.045463 kernel: Serial: AMBA PL011 UART driver
Jan 20 01:17:50.045468 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:17:50.045473 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:17:50.045477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 20 01:17:50.045482 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 20 01:17:50.045488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:17:50.045492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:17:50.045497 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 20 01:17:50.045502 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 20 01:17:50.045507 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:17:50.045511 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:17:50.045516 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:17:50.045521 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:17:50.045525 kernel: ACPI: Interpreter enabled
Jan 20 01:17:50.045531 kernel: ACPI: Using GIC for interrupt routing
Jan 20 01:17:50.045536 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 20 01:17:50.045541 kernel: printk: legacy console [ttyAMA0] enabled
Jan 20 01:17:50.045545 kernel: printk: legacy bootconsole [pl11] disabled
Jan 20 01:17:50.045550 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 20 01:17:50.045555 kernel: ACPI: CPU0 has been hot-added
Jan 20 01:17:50.045560 kernel: ACPI: CPU1 has been hot-added
Jan 20 01:17:50.045564 kernel: iommu: Default domain type: Translated
Jan 20 01:17:50.045569 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 20 01:17:50.045574 kernel: efivars: Registered efivars operations
Jan 20 01:17:50.045579 kernel: vgaarb: loaded
Jan 20 01:17:50.045584 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 20 01:17:50.045589 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:17:50.045593 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:17:50.045598 kernel: pnp: PnP ACPI init
Jan 20 01:17:50.045603 kernel: pnp: PnP ACPI: found 0 devices
Jan 20 01:17:50.045607 kernel: NET: Registered PF_INET protocol family
Jan 20 01:17:50.045612 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:17:50.045617 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:17:50.045623 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:17:50.045627 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:17:50.045632 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:17:50.045637 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:17:50.045642 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:17:50.045646 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:17:50.045651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:17:50.045656 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:17:50.045661 kernel: kvm [1]: HYP mode not available
Jan 20 01:17:50.045666 kernel: Initialise system trusted keyrings
Jan 20 01:17:50.045671 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:17:50.045676 kernel: Key type asymmetric registered
Jan 20 01:17:50.045680 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:17:50.045685 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 20 01:17:50.045690 kernel: io scheduler mq-deadline registered
Jan 20 01:17:50.045695 kernel: io scheduler kyber registered
Jan 20 01:17:50.045699 kernel: io scheduler bfq registered
Jan 20 01:17:50.045704 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:17:50.045709 kernel: thunder_xcv, ver 1.0
Jan 20 01:17:50.045714 kernel: thunder_bgx, ver 1.0
Jan 20 01:17:50.045719 kernel: nicpf, ver 1.0
Jan 20 01:17:50.045723 kernel: nicvf, ver 1.0
Jan 20 01:17:50.045827 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 20 01:17:50.045877 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T01:17:49 UTC (1768871869)
Jan 20 01:17:50.045884 kernel: efifb: probing for efifb
Jan 20 01:17:50.045890 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 20 01:17:50.045895 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 20 01:17:50.045900 kernel: efifb: scrolling: redraw
Jan 20 01:17:50.045904 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:17:50.045909 kernel: Console: switching to colour frame buffer device 128x48
Jan 20 01:17:50.045914 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:17:50.045919 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 20 01:17:50.045924 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 01:17:50.045928 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 20 01:17:50.045934 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:17:50.045939 kernel: watchdog: NMI not fully supported
Jan 20 01:17:50.045943 kernel: watchdog: Hard watchdog permanently disabled
Jan 20 01:17:50.045948 kernel: Segment Routing with IPv6
Jan 20 01:17:50.045953 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:17:50.045957 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:17:50.045962 kernel: Key type dns_resolver registered
Jan 20 01:17:50.045967 kernel: registered taskstats version 1
Jan 20 01:17:50.045972 kernel: Loading compiled-in X.509 certificates
Jan 20 01:17:50.045976 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3a8e96311e10f8204c78917500006eba3c60d834'
Jan 20 01:17:50.045982 kernel: Demotion targets for Node 0: null
Jan 20 01:17:50.045987 kernel: Key type .fscrypt registered
Jan 20 01:17:50.045991 kernel: Key type fscrypt-provisioning registered
Jan 20 01:17:50.045996 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:17:50.046001 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:17:50.046006 kernel: ima: No architecture policies found
Jan 20 01:17:50.046010 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 20 01:17:50.046015 kernel: clk: Disabling unused clocks
Jan 20 01:17:50.046020 kernel: PM: genpd: Disabling unused power domains
Jan 20 01:17:50.046025 kernel: Warning: unable to open an initial console.
Jan 20 01:17:50.046030 kernel: Freeing unused kernel memory: 39552K
Jan 20 01:17:50.046035 kernel: Run /init as init process
Jan 20 01:17:50.046040 kernel: with arguments:
Jan 20 01:17:50.046044 kernel: /init
Jan 20 01:17:50.046049 kernel: with environment:
Jan 20 01:17:50.046053 kernel: HOME=/
Jan 20 01:17:50.046058 kernel: TERM=linux
Jan 20 01:17:50.046064 systemd[1]: Successfully made /usr/ read-only.
Jan 20 01:17:50.046071 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 01:17:50.046077 systemd[1]: Detected virtualization microsoft.
Jan 20 01:17:50.046082 systemd[1]: Detected architecture arm64.
Jan 20 01:17:50.046087 systemd[1]: Running in initrd.
Jan 20 01:17:50.046092 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:17:50.046097 systemd[1]: Hostname set to .
Jan 20 01:17:50.046102 systemd[1]: Initializing machine ID from random generator.
Jan 20 01:17:50.046108 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:17:50.046113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:17:50.046118 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:17:50.046124 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:17:50.046129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:17:50.046134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:17:50.046140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:17:50.046147 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:17:50.046152 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:17:50.046157 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:17:50.046162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:17:50.046168 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:17:50.046173 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:17:50.046178 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:17:50.046183 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:17:50.046189 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:17:50.046194 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:17:50.046199 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 01:17:50.046204 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 01:17:50.046210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:17:50.046215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:17:50.046220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:17:50.046225 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:17:50.046230 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 01:17:50.046236 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:17:50.046241 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 01:17:50.046247 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 01:17:50.046252 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 01:17:50.046257 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:17:50.046262 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:17:50.046277 systemd-journald[225]: Collecting audit messages is disabled.
Jan 20 01:17:50.046290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:17:50.048158 systemd-journald[225]: Journal started
Jan 20 01:17:50.048179 systemd-journald[225]: Runtime Journal (/run/log/journal/be8cfdf80da040cda67418be8d6c2c80) is 8M, max 78.3M, 70.3M free.
Jan 20 01:17:50.048521 systemd-modules-load[227]: Inserted module 'overlay'
Jan 20 01:17:50.060016 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:17:50.060019 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 01:17:50.085270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 01:17:50.085285 kernel: Bridge firewalling registered
Jan 20 01:17:50.068497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:17:50.085311 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jan 20 01:17:50.087692 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 01:17:50.096129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:17:50.103876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:17:50.113661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:17:50.132773 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:17:50.137381 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 01:17:50.155449 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:17:50.167372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:17:50.172285 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 01:17:50.178428 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 01:17:50.184946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:17:50.195736 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:17:50.207956 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 01:17:50.229546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:17:50.239475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:17:50.254340 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=3825f93c5ac04d887cdff1d17f655741a9a0c1b2ce2432debff700fb0368bb09
Jan 20 01:17:50.279181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:17:50.289972 systemd-resolved[262]: Positive Trust Anchors:
Jan 20 01:17:50.289979 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:17:50.289999 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:17:50.291550 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jan 20 01:17:50.293018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:17:50.299589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:17:50.398311 kernel: SCSI subsystem initialized
Jan 20 01:17:50.404327 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:17:50.411318 kernel: iscsi: registered transport (tcp)
Jan 20 01:17:50.423203 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:17:50.423213 kernel: QLogic iSCSI HBA Driver
Jan 20 01:17:50.435573 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 01:17:50.454631 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:17:50.460773 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:17:50.509422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:17:50.515415 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:17:50.575310 kernel: raid6: neonx8 gen() 18552 MB/s
Jan 20 01:17:50.594304 kernel: raid6: neonx4 gen() 18569 MB/s
Jan 20 01:17:50.614303 kernel: raid6: neonx2 gen() 17075 MB/s
Jan 20 01:17:50.633386 kernel: raid6: neonx1 gen() 15024 MB/s
Jan 20 01:17:50.652305 kernel: raid6: int64x8 gen() 10573 MB/s
Jan 20 01:17:50.672304 kernel: raid6: int64x4 gen() 10609 MB/s
Jan 20 01:17:50.691381 kernel: raid6: int64x2 gen() 8985 MB/s
Jan 20 01:17:50.712417 kernel: raid6: int64x1 gen() 7004 MB/s
Jan 20 01:17:50.712426 kernel: raid6: using algorithm neonx4 gen() 18569 MB/s
Jan 20 01:17:50.735241 kernel: raid6: .... xor() 15145 MB/s, rmw enabled
Jan 20 01:17:50.735276 kernel: raid6: using neon recovery algorithm
Jan 20 01:17:50.742896 kernel: xor: measuring software checksum speed
Jan 20 01:17:50.742911 kernel: 8regs : 28661 MB/sec
Jan 20 01:17:50.745343 kernel: 32regs : 28830 MB/sec
Jan 20 01:17:50.747719 kernel: arm64_neon : 37566 MB/sec
Jan 20 01:17:50.750582 kernel: xor: using function: arm64_neon (37566 MB/sec)
Jan 20 01:17:50.788317 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:17:50.793669 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:17:50.803247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:17:50.827950 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 20 01:17:50.832032 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:17:50.844493 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:17:50.867319 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation
Jan 20 01:17:50.886883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:17:50.896797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:17:50.937865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:17:50.948569 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:17:51.018653 kernel: hv_vmbus: Vmbus version:5.3 Jan 20 01:17:51.014149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:17:51.014256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:17:51.019697 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:17:51.074930 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 20 01:17:51.074947 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 01:17:51.074954 kernel: hv_vmbus: registering driver hid_hyperv Jan 20 01:17:51.074961 kernel: hv_vmbus: registering driver hv_netvsc Jan 20 01:17:51.074968 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 01:17:51.074975 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 20 01:17:51.074981 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 20 01:17:51.074988 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 20 01:17:51.041413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:17:51.089037 kernel: hv_vmbus: registering driver hv_storvsc Jan 20 01:17:51.091705 kernel: scsi host0: storvsc_host_t Jan 20 01:17:51.092331 kernel: scsi host1: storvsc_host_t Jan 20 01:17:51.092464 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 20 01:17:51.093304 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 20 01:17:51.096310 kernel: PTP clock support registered Jan 20 01:17:51.112148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 01:17:51.101373 kernel: hv_utils: Registering HyperV Utility Driver Jan 20 01:17:51.117575 kernel: hv_vmbus: registering driver hv_utils Jan 20 01:17:51.117587 kernel: hv_utils: TimeSync IC version 4.0 Jan 20 01:17:51.117592 kernel: hv_utils: Heartbeat IC version 3.0 Jan 20 01:17:51.117597 kernel: hv_utils: Shutdown IC version 3.2 Jan 20 01:17:51.117604 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 20 01:17:51.120633 systemd-journald[225]: Time jumped backwards, rotating. Jan 20 01:17:51.120668 kernel: hv_netvsc 002248b7-edef-0022-48b7-edef002248b7 eth0: VF slot 1 added Jan 20 01:17:51.120745 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 20 01:17:51.120808 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 20 01:17:51.077690 systemd-resolved[262]: Clock change detected. Flushing caches. Jan 20 01:17:51.129665 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 20 01:17:51.129777 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 20 01:17:51.135405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 01:17:51.141530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 01:17:51.156239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:17:51.156262 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 20 01:17:51.163254 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 20 01:17:51.163414 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:17:51.171368 kernel: hv_vmbus: registering driver hv_pci Jan 20 01:17:51.171394 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 20 01:17:51.171528 kernel: hv_pci 6f138e96-233e-40e6-a907-8c0168800aad: PCI VMBus probing: Using version 0x10004 Jan 20 01:17:51.181441 kernel: hv_pci 6f138e96-233e-40e6-a907-8c0168800aad: PCI host bridge to bus 233e:00 Jan 20 01:17:51.181565 kernel: pci_bus 233e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 20 01:17:51.186198 kernel: pci_bus 233e:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 01:17:51.194120 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:17:51.199364 kernel: pci 233e:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 20 01:17:51.201631 kernel: pci 233e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:17:51.211525 kernel: pci 233e:00:02.0: enabling Extended Tags Jan 20 01:17:51.221509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:17:51.221642 kernel: pci 233e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 233e:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 20 01:17:51.240578 kernel: pci_bus 233e:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 01:17:51.240724 kernel: pci 233e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 20 01:17:51.297127 kernel: mlx5_core 233e:00:02.0: enabling device (0000 -> 0002) Jan 20 01:17:51.304764 kernel: mlx5_core 233e:00:02.0: PTM is not supported by PCIe Jan 20 01:17:51.304849 kernel: mlx5_core 233e:00:02.0: firmware version: 16.30.5026 Jan 20 01:17:51.469492 kernel: hv_netvsc 002248b7-edef-0022-48b7-edef002248b7 eth0: VF registering: eth1 Jan 20 01:17:51.469680 kernel: mlx5_core 233e:00:02.0 eth1: joined to eth0 Jan 20 01:17:51.474854 kernel: mlx5_core 233e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 20 01:17:51.484514 kernel: mlx5_core 233e:00:02.0 enP9022s1: renamed from eth1 Jan 20 01:17:51.961640 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 20 01:17:52.082989 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 20 01:17:52.094278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:17:52.258520 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 20 01:17:52.265567 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 20 01:17:52.283515 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:17:52.288212 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:17:52.296667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:17:52.305675 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:17:52.314680 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:17:52.338065 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:17:52.357513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 01:17:52.369234 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:17:52.377219 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:17:53.391678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 20 01:17:53.408515 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:17:53.409253 disk-uuid[660]: The operation has completed successfully. Jan 20 01:17:53.480414 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:17:53.484142 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:17:53.502356 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:17:53.517515 sh[825]: Success Jan 20 01:17:53.594123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:17:53.594161 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:17:53.601510 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 01:17:53.608519 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 01:17:54.060765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:17:54.066406 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:17:54.085048 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 01:17:54.111762 kernel: BTRFS: device fsid b1d239e4-c666-4b78-9d3d-e9e6443c3359 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (843) Jan 20 01:17:54.111788 kernel: BTRFS info (device dm-0): first mount of filesystem b1d239e4-c666-4b78-9d3d-e9e6443c3359 Jan 20 01:17:54.116142 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:17:54.645165 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:17:54.645238 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 01:17:54.712979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:17:54.716868 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:17:54.724134 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:17:54.724761 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:17:54.748312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 20 01:17:54.777525 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (874) Jan 20 01:17:54.787289 kernel: BTRFS info (device sda6): first mount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381 Jan 20 01:17:54.787315 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:17:54.827222 kernel: BTRFS info (device sda6): turning on async discard Jan 20 01:17:54.827256 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 01:17:54.835553 kernel: BTRFS info (device sda6): last unmount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381 Jan 20 01:17:54.834713 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:17:54.845565 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:17:54.861688 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:17:54.873739 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:17:54.889590 systemd-networkd[1010]: lo: Link UP Jan 20 01:17:54.889592 systemd-networkd[1010]: lo: Gained carrier Jan 20 01:17:54.890714 systemd-networkd[1010]: Enumeration completed Jan 20 01:17:54.891594 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:17:54.897104 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:17:54.897106 systemd-networkd[1010]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:17:54.897329 systemd[1]: Reached target network.target - Network. 
Jan 20 01:17:54.967510 kernel: mlx5_core 233e:00:02.0 enP9022s1: Link up Jan 20 01:17:54.999521 kernel: hv_netvsc 002248b7-edef-0022-48b7-edef002248b7 eth0: Data path switched to VF: enP9022s1 Jan 20 01:17:54.999850 systemd-networkd[1010]: enP9022s1: Link UP Jan 20 01:17:54.999910 systemd-networkd[1010]: eth0: Link UP Jan 20 01:17:54.999973 systemd-networkd[1010]: eth0: Gained carrier Jan 20 01:17:54.999984 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:17:55.020844 systemd-networkd[1010]: enP9022s1: Gained carrier Jan 20 01:17:55.032523 systemd-networkd[1010]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:17:56.941378 ignition[1013]: Ignition 2.22.0 Jan 20 01:17:56.941395 ignition[1013]: Stage: fetch-offline Jan 20 01:17:56.945030 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:17:56.941487 ignition[1013]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:17:56.951614 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 20 01:17:56.941493 ignition[1013]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:17:56.941572 ignition[1013]: parsed url from cmdline: "" Jan 20 01:17:56.941575 ignition[1013]: no config URL provided Jan 20 01:17:56.941578 ignition[1013]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:17:56.941582 ignition[1013]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:17:56.941586 ignition[1013]: failed to fetch config: resource requires networking Jan 20 01:17:56.941785 ignition[1013]: Ignition finished successfully Jan 20 01:17:56.982603 ignition[1022]: Ignition 2.22.0 Jan 20 01:17:56.982608 ignition[1022]: Stage: fetch Jan 20 01:17:56.982785 ignition[1022]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:17:56.982795 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:17:56.982858 ignition[1022]: parsed url from cmdline: "" Jan 20 01:17:56.982860 ignition[1022]: no config URL provided Jan 20 01:17:56.982863 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:17:56.982870 ignition[1022]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:17:56.982893 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 01:17:57.011840 systemd-networkd[1010]: eth0: Gained IPv6LL Jan 20 01:17:57.051779 ignition[1022]: GET result: OK Jan 20 01:17:57.051832 ignition[1022]: config has been read from IMDS userdata Jan 20 01:17:57.051852 ignition[1022]: parsing config with SHA512: 8f9869003d3dc080e85e79d3e0c06cc1a5399fa6973af6091b190e550d1a8fe558c6dee1096d53e7b195146bd108a5dcde96a53fa626eacaf558bce85ed7c084 Jan 20 01:17:57.057690 unknown[1022]: fetched base config from "system" Jan 20 01:17:57.057695 unknown[1022]: fetched base config from "system" Jan 20 01:17:57.057945 ignition[1022]: fetch: fetch complete Jan 20 01:17:57.057704 unknown[1022]: fetched user config from "azure" Jan 20 01:17:57.057948 ignition[1022]: fetch: fetch passed Jan 20 01:17:57.065167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 01:17:57.057992 ignition[1022]: Ignition finished successfully Jan 20 01:17:57.071854 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:17:57.108895 ignition[1028]: Ignition 2.22.0 Jan 20 01:17:57.108906 ignition[1028]: Stage: kargs Jan 20 01:17:57.112689 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:17:57.109067 ignition[1028]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:17:57.117465 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:17:57.109073 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:17:57.109573 ignition[1028]: kargs: kargs passed Jan 20 01:17:57.109611 ignition[1028]: Ignition finished successfully Jan 20 01:17:57.147625 ignition[1034]: Ignition 2.22.0 Jan 20 01:17:57.147638 ignition[1034]: Stage: disks Jan 20 01:17:57.151220 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:17:57.147779 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:17:57.157271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:17:57.147786 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:17:57.165397 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:17:57.148305 ignition[1034]: disks: disks passed Jan 20 01:17:57.174216 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:17:57.148337 ignition[1034]: Ignition finished successfully Jan 20 01:17:57.182699 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:17:57.191143 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:17:57.200491 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 20 01:17:57.329362 systemd-fsck[1043]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jan 20 01:17:57.332773 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:17:57.345576 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:17:57.697526 kernel: EXT4-fs (sda9): mounted filesystem e54ab1b7-d0c9-4deb-8673-6708a877d2de r/w with ordered data mode. Quota mode: none. Jan 20 01:17:57.697723 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:17:57.701484 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:17:57.735204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:17:57.756010 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:17:57.776819 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1057) Jan 20 01:17:57.776848 kernel: BTRFS info (device sda6): first mount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381 Jan 20 01:17:57.781133 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:17:57.781258 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 01:17:57.792482 kernel: BTRFS info (device sda6): turning on async discard Jan 20 01:17:57.793517 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 01:17:57.796982 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:17:57.797012 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:17:57.802864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:17:57.813915 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:17:57.832943 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 01:17:58.641998 coreos-metadata[1059]: Jan 20 01:17:58.641 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 01:17:58.648161 coreos-metadata[1059]: Jan 20 01:17:58.647 INFO Fetch successful Jan 20 01:17:58.648161 coreos-metadata[1059]: Jan 20 01:17:58.647 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 01:17:58.659957 coreos-metadata[1059]: Jan 20 01:17:58.659 INFO Fetch successful Jan 20 01:17:58.663817 coreos-metadata[1059]: Jan 20 01:17:58.663 INFO wrote hostname ci-4459.2.2-n-d40ac89f78 to /sysroot/etc/hostname Jan 20 01:17:58.670316 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:17:59.199783 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:17:59.266061 initrd-setup-root[1094]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:17:59.301294 initrd-setup-root[1101]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:17:59.305882 initrd-setup-root[1108]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:18:00.787254 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:18:00.797207 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:18:00.805985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:18:00.816824 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:18:00.828539 kernel: BTRFS info (device sda6): last unmount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381 Jan 20 01:18:00.845941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 20 01:18:00.854681 ignition[1176]: INFO : Ignition 2.22.0 Jan 20 01:18:00.854681 ignition[1176]: INFO : Stage: mount Jan 20 01:18:00.861267 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:18:00.861267 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:18:00.861267 ignition[1176]: INFO : mount: mount passed Jan 20 01:18:00.861267 ignition[1176]: INFO : Ignition finished successfully Jan 20 01:18:00.859814 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:18:00.865746 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:18:00.891596 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:18:00.913525 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1188) Jan 20 01:18:00.923357 kernel: BTRFS info (device sda6): first mount of filesystem e20a00db-1b49-4e8f-8029-c59d826af381 Jan 20 01:18:00.923385 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:18:00.932826 kernel: BTRFS info (device sda6): turning on async discard Jan 20 01:18:00.932856 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 01:18:00.934114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 01:18:00.962011 ignition[1205]: INFO : Ignition 2.22.0 Jan 20 01:18:00.962011 ignition[1205]: INFO : Stage: files Jan 20 01:18:00.967854 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:18:00.967854 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:18:00.967854 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:18:00.984521 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:18:00.984521 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:18:01.100139 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:18:01.105868 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:18:01.105868 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:18:01.100420 unknown[1205]: wrote ssh authorized keys file for user: core Jan 20 01:18:01.191385 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 01:18:01.199035 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 20 01:18:01.241266 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:18:01.446576 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:18:01.454368 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:18:01.510724 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:18:01.510724 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:18:01.510724 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 01:18:01.510724 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 01:18:01.510724 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 01:18:01.510724 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 20 01:18:02.291627 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:18:02.689076 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 20 01:18:02.689076 ignition[1205]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:18:02.703625 ignition[1205]: INFO : files: files passed Jan 20 01:18:02.703625 ignition[1205]: INFO : Ignition finished successfully Jan 20 01:18:02.707440 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:18:02.715947 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:18:02.731455 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 20 01:18:02.978304 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:18:02.978304 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:18:02.996737 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:18:02.982384 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:18:02.990633 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:18:03.002088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:18:03.043836 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:18:03.043911 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:18:03.049168 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:18:03.056868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:18:03.080897 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:18:03.081000 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:18:03.090407 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:18:03.099578 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:18:03.130154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:18:03.141572 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:18:03.157214 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:18:03.162541 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 20 01:18:03.171747 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:18:03.180111 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:18:03.180195 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:18:03.192257 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:18:03.196806 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:18:03.205019 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:18:03.213314 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:18:03.221423 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:18:03.230700 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:18:03.239791 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:18:03.249203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:18:03.259233 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:18:03.267390 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:18:03.276408 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:18:03.283398 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:18:03.283492 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:18:03.294242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:18:03.298642 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:18:03.307001 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:18:03.307064 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:18:03.315597 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 20 01:18:03.315672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:18:03.328259 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:18:03.328336 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:18:03.333754 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:18:03.333822 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:18:03.341444 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 01:18:03.341515 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:18:03.352667 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:18:03.426547 ignition[1259]: INFO : Ignition 2.22.0 Jan 20 01:18:03.426547 ignition[1259]: INFO : Stage: umount Jan 20 01:18:03.426547 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:18:03.426547 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:18:03.426547 ignition[1259]: INFO : umount: umount passed Jan 20 01:18:03.426547 ignition[1259]: INFO : Ignition finished successfully Jan 20 01:18:03.365554 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:18:03.365668 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:18:03.376120 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:18:03.395446 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:18:03.395575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:18:03.406285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:18:03.406359 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:18:03.423073 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 20 01:18:03.423861 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:18:03.423934 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:18:03.430590 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:18:03.430644 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:18:03.440774 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:18:03.440852 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:18:03.447524 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:18:03.447565 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:18:03.454574 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:18:03.454604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:18:03.463559 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:18:03.463588 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:18:03.470427 systemd[1]: Stopped target network.target - Network. Jan 20 01:18:03.477385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:18:03.477420 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:18:03.485628 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:18:03.494007 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:18:03.497371 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:18:03.502379 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:18:03.510227 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:18:03.518259 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:18:03.518281 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 20 01:18:03.526113 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:18:03.526130 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:18:03.534380 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:18:03.534411 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:18:03.542234 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:18:03.542261 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:18:03.549826 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:18:03.549855 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:18:03.557734 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:18:03.763200 kernel: hv_netvsc 002248b7-edef-0022-48b7-edef002248b7 eth0: Data path switched from VF: enP9022s1 Jan 20 01:18:03.565171 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:18:03.583919 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:18:03.584032 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:18:03.593058 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 01:18:03.593211 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:18:03.593306 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:18:03.606140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 01:18:03.606578 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 01:18:03.614068 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:18:03.614115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:18:03.623213 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 20 01:18:03.635781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:18:03.635832 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:18:03.645007 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:18:03.645047 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:18:03.656792 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:18:03.656822 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:18:03.661240 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:18:03.661267 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:18:03.676034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:18:03.681511 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 01:18:03.681553 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:18:03.696934 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:18:03.697045 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:18:03.705466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:18:03.705493 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:18:03.713314 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:18:03.713338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:18:03.721083 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:18:03.721123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:18:03.732718 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 20 01:18:03.732756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:18:03.750153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:18:03.750190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:18:03.763735 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:18:03.777883 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:18:03.777928 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:18:03.787205 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:18:03.787246 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:18:03.797252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:18:03.797286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:18:03.806146 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 01:18:03.806184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 01:18:03.806216 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:18:04.003637 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 20 01:18:03.806420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:18:03.806519 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:18:03.875309 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:18:03.875424 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:18:03.883992 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 20 01:18:03.893829 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:18:03.922418 systemd[1]: Switching root. Jan 20 01:18:04.030476 systemd-journald[225]: Journal stopped Jan 20 01:18:08.904581 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:18:08.904598 kernel: SELinux: policy capability open_perms=1 Jan 20 01:18:08.904606 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:18:08.904611 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:18:08.904616 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:18:08.904623 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:18:08.904629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:18:08.904634 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:18:08.904640 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:18:08.904682 kernel: audit: type=1403 audit(1768871884.504:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:18:08.904690 systemd[1]: Successfully loaded SELinux policy in 90.530ms. Jan 20 01:18:08.904698 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.257ms. Jan 20 01:18:08.904706 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:18:08.904712 systemd[1]: Detected virtualization microsoft. Jan 20 01:18:08.904718 systemd[1]: Detected architecture arm64. Jan 20 01:18:08.904724 systemd[1]: Detected first boot. Jan 20 01:18:08.904732 systemd[1]: Hostname set to . Jan 20 01:18:08.904738 systemd[1]: Initializing machine ID from random generator. Jan 20 01:18:08.904744 zram_generator::config[1302]: No configuration found. 
Jan 20 01:18:08.904750 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:18:08.904756 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:18:08.904763 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 01:18:08.904769 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:18:08.904775 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:18:08.904782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:18:08.904788 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:18:08.904794 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:18:08.904800 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:18:08.904806 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:18:08.904813 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:18:08.904820 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:18:08.904826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:18:08.904832 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:18:08.904839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:18:08.904845 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:18:08.904851 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:18:08.904857 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:18:08.904863 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 20 01:18:08.904871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:18:08.904877 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 01:18:08.904885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:18:08.904891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:18:08.904897 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:18:08.904903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:18:08.904909 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:18:08.904916 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:18:08.904922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:18:08.904929 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:18:08.904935 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:18:08.904941 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:18:08.904947 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:18:08.904953 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:18:08.904961 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:18:08.904967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:18:08.904974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:18:08.904980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:18:08.904987 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:18:08.904993 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 20 01:18:08.904999 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:18:08.905006 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:18:08.905012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:18:08.905019 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:18:08.905025 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:18:08.905031 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:18:08.905038 systemd[1]: Reached target machines.target - Containers. Jan 20 01:18:08.905044 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:18:08.905051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:18:08.905058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:18:08.905064 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:18:08.905070 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:18:08.905076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:18:08.905083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:18:08.905089 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:18:08.905095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:18:08.905101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:18:08.905108 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 20 01:18:08.905116 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:18:08.905122 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:18:08.905128 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:18:08.905135 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:18:08.905141 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:18:08.905147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:18:08.905154 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:18:08.905174 systemd-journald[1399]: Collecting audit messages is disabled. Jan 20 01:18:08.905189 systemd-journald[1399]: Journal started Jan 20 01:18:08.905203 systemd-journald[1399]: Runtime Journal (/run/log/journal/8895b33c2cd14fe898fb3e7625fceb47) is 8M, max 78.3M, 70.3M free. Jan 20 01:18:08.133137 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:18:08.143901 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 01:18:08.144254 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:18:08.144514 systemd[1]: systemd-journald.service: Consumed 2.282s CPU time. Jan 20 01:18:08.914005 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:18:08.931512 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:18:08.931540 kernel: ACPI: bus type drm_connector registered Jan 20 01:18:08.940869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 20 01:18:08.945516 kernel: fuse: init (API version 7.41) Jan 20 01:18:08.956371 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:18:08.956399 systemd[1]: Stopped verity-setup.service. Jan 20 01:18:08.970906 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:18:08.971509 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:18:08.977395 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:18:08.977575 kernel: loop: module loaded Jan 20 01:18:08.982401 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:18:08.986411 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:18:08.991045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:18:08.995826 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:18:08.999871 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:18:09.004791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:18:09.009946 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:18:09.010066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:18:09.015133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:18:09.015254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:18:09.020213 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:18:09.020327 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:18:09.024916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:18:09.025033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:18:09.030306 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 20 01:18:09.030420 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:18:09.035086 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:18:09.035196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:18:09.039895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:18:09.044776 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:18:09.049936 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:18:09.063077 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:18:09.070595 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:18:09.086634 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:18:09.093766 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:18:09.093794 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:18:09.098558 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 01:18:09.106636 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:18:09.110783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:18:09.122608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:18:09.128653 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:18:09.134492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 20 01:18:09.150476 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:18:09.154972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:18:09.157623 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:18:09.167936 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:18:09.174604 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:18:09.182417 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 01:18:09.189118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:18:09.195767 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:18:09.200310 systemd-journald[1399]: Time spent on flushing to /var/log/journal/8895b33c2cd14fe898fb3e7625fceb47 is 35.443ms for 930 entries. Jan 20 01:18:09.200310 systemd-journald[1399]: System Journal (/var/log/journal/8895b33c2cd14fe898fb3e7625fceb47) is 11.8M, max 2.6G, 2.6G free. Jan 20 01:18:09.291208 systemd-journald[1399]: Received client request to flush runtime journal. Jan 20 01:18:09.292207 systemd-journald[1399]: /var/log/journal/8895b33c2cd14fe898fb3e7625fceb47/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 20 01:18:09.292241 systemd-journald[1399]: Rotating system journal. Jan 20 01:18:09.205634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:18:09.210606 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:18:09.220023 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:18:09.225548 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Jan 20 01:18:09.245811 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:18:09.293509 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:18:09.307921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:18:09.309046 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 01:18:09.362521 kernel: loop0: detected capacity change from 0 to 100632 Jan 20 01:18:09.384225 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:18:09.389445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:18:09.477072 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jan 20 01:18:09.477386 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jan 20 01:18:09.479998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:18:09.834882 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:18:09.840781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:18:09.866588 systemd-udevd[1462]: Using default interface naming scheme 'v255'. Jan 20 01:18:10.085620 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:18:10.141514 kernel: loop1: detected capacity change from 0 to 27936 Jan 20 01:18:10.147396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:18:10.163549 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:18:10.188903 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 20 01:18:10.239290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 20 01:18:10.275515 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:18:10.275567 kernel: loop2: detected capacity change from 0 to 200800 Jan 20 01:18:10.310526 kernel: hv_vmbus: registering driver hv_balloon Jan 20 01:18:10.310575 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:18:10.323666 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 01:18:10.328173 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 20 01:18:10.345526 kernel: loop3: detected capacity change from 0 to 119840 Jan 20 01:18:10.350571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:18:10.367994 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:18:10.368121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:18:10.377336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:18:10.394381 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:18:10.416510 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 01:18:10.426029 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 01:18:10.426078 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 01:18:10.429241 kernel: Console: switching to colour dummy device 80x25 Jan 20 01:18:10.432531 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 01:18:10.446939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:18:10.447269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:18:10.454181 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 01:18:10.457639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 20 01:18:10.476788 kernel: loop4: detected capacity change from 0 to 100632 Jan 20 01:18:10.489511 kernel: loop5: detected capacity change from 0 to 27936 Jan 20 01:18:10.508508 kernel: loop6: detected capacity change from 0 to 200800 Jan 20 01:18:10.529507 kernel: loop7: detected capacity change from 0 to 119840 Jan 20 01:18:10.540795 (sd-merge)[1549]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 20 01:18:10.541130 (sd-merge)[1549]: Merged extensions into '/usr'. Jan 20 01:18:10.554514 systemd[1]: Reload requested from client PID 1439 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:18:10.554525 systemd[1]: Reloading... Jan 20 01:18:10.563778 systemd-networkd[1493]: lo: Link UP Jan 20 01:18:10.563968 systemd-networkd[1493]: lo: Gained carrier Jan 20 01:18:10.565019 systemd-networkd[1493]: Enumeration completed Jan 20 01:18:10.565328 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:18:10.565392 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:18:10.574513 kernel: MACsec IEEE 802.1AE Jan 20 01:18:10.615540 zram_generator::config[1638]: No configuration found. Jan 20 01:18:10.616532 kernel: mlx5_core 233e:00:02.0 enP9022s1: Link up Jan 20 01:18:10.636523 kernel: hv_netvsc 002248b7-edef-0022-48b7-edef002248b7 eth0: Data path switched to VF: enP9022s1 Jan 20 01:18:10.636856 systemd-networkd[1493]: enP9022s1: Link UP Jan 20 01:18:10.636983 systemd-networkd[1493]: eth0: Link UP Jan 20 01:18:10.636988 systemd-networkd[1493]: eth0: Gained carrier Jan 20 01:18:10.637000 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 01:18:10.640729 systemd-networkd[1493]: enP9022s1: Gained carrier Jan 20 01:18:10.646535 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:18:10.774135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:18:10.779447 systemd[1]: Reloading finished in 224 ms. Jan 20 01:18:10.806362 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:18:10.811072 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:18:10.842335 systemd[1]: Starting ensure-sysext.service... Jan 20 01:18:10.848622 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:18:10.856619 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 01:18:10.862610 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:18:10.875117 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:18:10.887580 systemd[1]: Reload requested from client PID 1691 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:18:10.887593 systemd[1]: Reloading... Jan 20 01:18:10.892931 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 01:18:10.892956 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 01:18:10.893089 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:18:10.893227 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 20 01:18:10.893837 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:18:10.893995 systemd-tmpfiles[1695]: ACLs are not supported, ignoring. Jan 20 01:18:10.894028 systemd-tmpfiles[1695]: ACLs are not supported, ignoring. Jan 20 01:18:10.903414 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:18:10.903427 systemd-tmpfiles[1695]: Skipping /boot Jan 20 01:18:10.907873 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:18:10.907884 systemd-tmpfiles[1695]: Skipping /boot Jan 20 01:18:10.948658 zram_generator::config[1729]: No configuration found. Jan 20 01:18:11.098278 systemd[1]: Reloading finished in 210 ms. Jan 20 01:18:11.118872 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:18:11.124582 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 01:18:11.130593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:18:11.136910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:18:11.148485 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:18:11.156187 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:18:11.163686 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:18:11.171669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:18:11.177665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:18:11.186741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 20 01:18:11.190730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:18:11.198320 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:18:11.207070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:18:11.215379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:18:11.215470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:18:11.216206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:18:11.216407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:18:11.221830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:18:11.221940 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:18:11.227703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:18:11.227809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:18:11.237176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:18:11.239688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:18:11.251246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:18:11.259647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:18:11.263748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:18:11.263829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:18:11.264668 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 01:18:11.270221 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 01:18:11.275888 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:18:11.276256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:18:11.282169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:18:11.282302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:18:11.282797 systemd-resolved[1795]: Positive Trust Anchors:
Jan 20 01:18:11.283008 systemd-resolved[1795]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:18:11.283032 systemd-resolved[1795]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:18:11.288301 augenrules[1825]: No rules
Jan 20 01:18:11.288652 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:18:11.288776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:18:11.293485 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:18:11.293625 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:18:11.304006 systemd-resolved[1795]: Using system hostname 'ci-4459.2.2-n-d40ac89f78'.
Jan 20 01:18:11.306759 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:18:11.310552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:18:11.319666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:18:11.324797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:18:11.335542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:18:11.341440 augenrules[1836]: /sbin/augenrules: No change
Jan 20 01:18:11.341789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:18:11.347140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:18:11.347235 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:18:11.347339 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 01:18:11.353075 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:18:11.353224 augenrules[1857]: No rules
Jan 20 01:18:11.358268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:18:11.358420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:18:11.363643 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:18:11.363780 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:18:11.368467 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:18:11.368601 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:18:11.373263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:18:11.373376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:18:11.379103 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:18:11.379336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:18:11.385956 systemd[1]: Finished ensure-sysext.service.
Jan 20 01:18:11.391428 systemd[1]: Reached target network.target - Network.
Jan 20 01:18:11.395324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:18:11.400207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:18:11.400257 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:18:12.127123 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 01:18:12.132581 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 01:18:12.432740 systemd-networkd[1493]: eth0: Gained IPv6LL
Jan 20 01:18:12.434902 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 01:18:12.440171 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 01:18:18.281272 ldconfig[1434]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 01:18:18.288855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 01:18:18.294959 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 01:18:18.309205 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 01:18:18.313845 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:18:18.318046 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 01:18:18.322911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 01:18:18.328070 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 01:18:18.332457 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 01:18:18.337314 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 01:18:18.342304 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 01:18:18.342325 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:18:18.345797 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:18:18.384822 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 01:18:18.390264 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 01:18:18.395134 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 01:18:18.400100 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 01:18:18.405152 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 01:18:18.410712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 01:18:18.414916 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 01:18:18.420012 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 01:18:18.424224 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:18:18.427932 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:18:18.431522 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:18:18.431541 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 01:18:18.462945 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 20 01:18:18.474583 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 01:18:18.481601 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 20 01:18:18.488685 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 01:18:18.495210 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 01:18:18.502176 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 01:18:18.514108 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 01:18:18.518103 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 01:18:18.520600 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 20 01:18:18.524740 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 20 01:18:18.525365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:18.532410 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 01:18:18.540619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 01:18:18.546999 jq[1883]: false
Jan 20 01:18:18.547268 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 01:18:18.559386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 01:18:18.565766 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 01:18:18.573610 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 01:18:18.579290 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 01:18:18.579602 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 01:18:18.580625 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 01:18:18.592028 extend-filesystems[1884]: Found /dev/sda6
Jan 20 01:18:18.610458 kernel: hv_utils: KVP IC version 4.0
Jan 20 01:18:18.587061 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 01:18:18.595654 KVP[1885]: KVP starting; pid is:1885
Jan 20 01:18:18.598836 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 01:18:18.598559 chronyd[1875]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 20 01:18:18.611133 jq[1903]: true
Jan 20 01:18:18.604243 KVP[1885]: KVP LIC Version: 3.1
Jan 20 01:18:18.611941 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 01:18:18.612195 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 01:18:18.613737 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 01:18:18.613877 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 01:18:18.622560 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 01:18:18.624686 extend-filesystems[1884]: Found /dev/sda9
Jan 20 01:18:18.630750 extend-filesystems[1884]: Checking size of /dev/sda9
Jan 20 01:18:18.629413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 01:18:18.657231 (ntainerd)[1915]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 20 01:18:18.662648 jq[1914]: true
Jan 20 01:18:18.663635 update_engine[1902]: I20260120 01:18:18.662256 1902 main.cc:92] Flatcar Update Engine starting
Jan 20 01:18:18.675904 systemd-logind[1897]: New seat seat0.
Jan 20 01:18:18.680692 systemd-logind[1897]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 01:18:18.680825 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 01:18:18.708413 chronyd[1875]: Timezone right/UTC failed leap second check, ignoring
Jan 20 01:18:18.708544 chronyd[1875]: Loaded seccomp filter (level 2)
Jan 20 01:18:18.708620 systemd[1]: Started chronyd.service - NTP client/server.
Jan 20 01:18:18.714466 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 01:18:18.725541 tar[1911]: linux-arm64/LICENSE
Jan 20 01:18:18.725734 tar[1911]: linux-arm64/helm
Jan 20 01:18:18.732462 extend-filesystems[1884]: Old size kept for /dev/sda9
Jan 20 01:18:18.736410 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 01:18:18.736602 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 01:18:18.766511 sshd_keygen[1901]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 01:18:18.804412 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 01:18:18.809204 bash[1942]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 01:18:18.812489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 01:18:18.822698 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 01:18:18.828091 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 01:18:18.832912 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 20 01:18:18.841526 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 01:18:18.841766 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 01:18:18.856798 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 01:18:18.865153 dbus-daemon[1878]: [system] SELinux support is enabled
Jan 20 01:18:18.865278 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 01:18:18.871092 update_engine[1902]: I20260120 01:18:18.870385 1902 update_check_scheduler.cc:74] Next update check in 6m54s
Jan 20 01:18:18.885026 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 01:18:18.885580 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 01:18:18.885948 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 20 01:18:18.894643 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 01:18:18.894660 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 01:18:18.901720 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 01:18:18.909321 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 01:18:18.919117 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 01:18:18.931159 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 20 01:18:18.942748 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 01:18:18.951964 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 01:18:18.954355 coreos-metadata[1877]: Jan 20 01:18:18.954 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 20 01:18:18.958821 coreos-metadata[1877]: Jan 20 01:18:18.958 INFO Fetch successful
Jan 20 01:18:18.958899 coreos-metadata[1877]: Jan 20 01:18:18.958 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 20 01:18:18.962622 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 20 01:18:18.972720 coreos-metadata[1877]: Jan 20 01:18:18.972 INFO Fetch successful
Jan 20 01:18:18.972781 coreos-metadata[1877]: Jan 20 01:18:18.972 INFO Fetching http://168.63.129.16/machine/90c68d20-54f1-4215-a4a2-ddc20993ff77/d1c5844a%2Dd3b8%2D4a0b%2D91b3%2Da3aaed73d619.%5Fci%2D4459.2.2%2Dn%2Dd40ac89f78?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 20 01:18:18.980577 coreos-metadata[1877]: Jan 20 01:18:18.980 INFO Fetch successful
Jan 20 01:18:18.980759 coreos-metadata[1877]: Jan 20 01:18:18.980 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 20 01:18:18.994506 coreos-metadata[1877]: Jan 20 01:18:18.991 INFO Fetch successful
Jan 20 01:18:19.031120 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 20 01:18:19.037780 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 01:18:19.068828 locksmithd[2042]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 01:18:19.129666 tar[1911]: linux-arm64/README.md
Jan 20 01:18:19.140435 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 01:18:19.366311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:19.441283 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:18:19.690106 containerd[1915]: time="2026-01-20T01:18:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 20 01:18:19.691964 containerd[1915]: time="2026-01-20T01:18:19.691929208Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 20 01:18:19.699043 containerd[1915]: time="2026-01-20T01:18:19.699013456Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.576µs"
Jan 20 01:18:19.699043 containerd[1915]: time="2026-01-20T01:18:19.699036960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 20 01:18:19.699105 containerd[1915]: time="2026-01-20T01:18:19.699050416Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 20 01:18:19.699192 containerd[1915]: time="2026-01-20T01:18:19.699174576Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 20 01:18:19.699192 containerd[1915]: time="2026-01-20T01:18:19.699190504Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 20 01:18:19.699233 containerd[1915]: time="2026-01-20T01:18:19.699205800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699260 containerd[1915]: time="2026-01-20T01:18:19.699245768Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699260 containerd[1915]: time="2026-01-20T01:18:19.699255672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699416 containerd[1915]: time="2026-01-20T01:18:19.699399176Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699416 containerd[1915]: time="2026-01-20T01:18:19.699413272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699456 containerd[1915]: time="2026-01-20T01:18:19.699420304Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699456 containerd[1915]: time="2026-01-20T01:18:19.699425528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699484 containerd[1915]: time="2026-01-20T01:18:19.699476624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699684 containerd[1915]: time="2026-01-20T01:18:19.699659040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699714 containerd[1915]: time="2026-01-20T01:18:19.699691096Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 01:18:19.699714 containerd[1915]: time="2026-01-20T01:18:19.699698768Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 20 01:18:19.699752 containerd[1915]: time="2026-01-20T01:18:19.699726648Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 20 01:18:19.699890 containerd[1915]: time="2026-01-20T01:18:19.699861880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 01:18:19.699926 containerd[1915]: time="2026-01-20T01:18:19.699914336Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 01:18:19.712288 containerd[1915]: time="2026-01-20T01:18:19.712254376Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 01:18:19.712351 containerd[1915]: time="2026-01-20T01:18:19.712305040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 01:18:19.712351 containerd[1915]: time="2026-01-20T01:18:19.712316256Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 01:18:19.712351 containerd[1915]: time="2026-01-20T01:18:19.712325584Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 01:18:19.712351 containerd[1915]: time="2026-01-20T01:18:19.712333632Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 01:18:19.712351 containerd[1915]: time="2026-01-20T01:18:19.712341232Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 01:18:19.712351 containerd[1915]: time="2026-01-20T01:18:19.712352088Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 01:18:19.712428 containerd[1915]: time="2026-01-20T01:18:19.712359312Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 01:18:19.712428 containerd[1915]: time="2026-01-20T01:18:19.712366152Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 01:18:19.712428 containerd[1915]: time="2026-01-20T01:18:19.712372216Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 01:18:19.712428 containerd[1915]: time="2026-01-20T01:18:19.712377656Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 01:18:19.712428 containerd[1915]: time="2026-01-20T01:18:19.712385824Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 01:18:19.712485 containerd[1915]: time="2026-01-20T01:18:19.712479752Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712492720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712530272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712539064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712545912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712552712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712559536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712565944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712572896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712583832Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712590424Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712628176Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712639832Z" level=info msg="Start snapshots syncer"
Jan 20 01:18:19.712716 containerd[1915]: time="2026-01-20T01:18:19.712653376Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 20 01:18:19.712890 containerd[1915]: time="2026-01-20T01:18:19.712846400Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 20 01:18:19.712890 containerd[1915]: time="2026-01-20T01:18:19.712879920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.712909584Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.712995336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713009696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713016944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713023984Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713030976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713038296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713046032Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713060736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713068112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713074312Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713094160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713102336Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 01:18:19.714220 containerd[1915]: time="2026-01-20T01:18:19.713107384Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713112408Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713116736Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713122240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713129424Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713139832Z" level=info msg="runtime interface created"
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713143008Z" level=info msg="created NRI interface"
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713148016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713155184Z" level=info msg="Connect containerd service"
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713167296Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 01:18:19.714396 containerd[1915]: time="2026-01-20T01:18:19.713685072Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 01:18:19.723463 kubelet[2069]: E0120 01:18:19.723425 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:18:19.725368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:18:19.725472 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:18:19.725735 systemd[1]: kubelet.service: Consumed 488ms CPU time, 246.1M memory peak.
Jan 20 01:18:20.276651 containerd[1915]: time="2026-01-20T01:18:20.276593616Z" level=info msg="Start subscribing containerd event"
Jan 20 01:18:20.276769 containerd[1915]: time="2026-01-20T01:18:20.276667792Z" level=info msg="Start recovering state"
Jan 20 01:18:20.276769 containerd[1915]: time="2026-01-20T01:18:20.276748800Z" level=info msg="Start event monitor"
Jan 20 01:18:20.276769 containerd[1915]: time="2026-01-20T01:18:20.276761088Z" level=info msg="Start cni network conf syncer for default"
Jan 20 01:18:20.276769 containerd[1915]: time="2026-01-20T01:18:20.276766976Z" level=info msg="Start streaming server"
Jan 20 01:18:20.276934 containerd[1915]: time="2026-01-20T01:18:20.276913128Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 01:18:20.276934 containerd[1915]: time="2026-01-20T01:18:20.276930384Z" level=info msg="runtime interface starting up..."
Jan 20 01:18:20.276991 containerd[1915]: time="2026-01-20T01:18:20.276935096Z" level=info msg="starting plugins..."
Jan 20 01:18:20.276991 containerd[1915]: time="2026-01-20T01:18:20.276954136Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:18:20.277815 containerd[1915]: time="2026-01-20T01:18:20.277789984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:18:20.277840 containerd[1915]: time="2026-01-20T01:18:20.277832048Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:18:20.278457 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:18:20.280554 containerd[1915]: time="2026-01-20T01:18:20.280531816Z" level=info msg="containerd successfully booted in 0.590778s" Jan 20 01:18:20.283479 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:18:20.292541 systemd[1]: Startup finished in 1.688s (kernel) + 14.839s (initrd) + 15.876s (userspace) = 32.404s. Jan 20 01:18:20.799957 login[2027]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:18:20.800936 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:18:20.811621 systemd-logind[1897]: New session 2 of user core. Jan 20 01:18:20.812860 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:18:20.814331 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:18:20.816304 systemd-logind[1897]: New session 1 of user core. Jan 20 01:18:20.847187 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:18:20.848924 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:18:20.855141 (systemd)[2096]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:18:20.856771 systemd-logind[1897]: New session c1 of user core. Jan 20 01:18:21.020435 systemd[2096]: Queued start job for default target default.target. 
Jan 20 01:18:21.028410 systemd[2096]: Created slice app.slice - User Application Slice. Jan 20 01:18:21.028436 systemd[2096]: Reached target paths.target - Paths. Jan 20 01:18:21.028462 systemd[2096]: Reached target timers.target - Timers. Jan 20 01:18:21.029286 systemd[2096]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:18:21.036043 systemd[2096]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:18:21.036085 systemd[2096]: Reached target sockets.target - Sockets. Jan 20 01:18:21.036123 systemd[2096]: Reached target basic.target - Basic System. Jan 20 01:18:21.036145 systemd[2096]: Reached target default.target - Main User Target. Jan 20 01:18:21.036162 systemd[2096]: Startup finished in 175ms. Jan 20 01:18:21.036216 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:18:21.037413 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:18:21.038941 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 20 01:18:21.647372 waagent[2043]: 2026-01-20T01:18:21.643483Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 20 01:18:21.647871 waagent[2043]: 2026-01-20T01:18:21.647837Z INFO Daemon Daemon OS: flatcar 4459.2.2 Jan 20 01:18:21.651167 waagent[2043]: 2026-01-20T01:18:21.651135Z INFO Daemon Daemon Python: 3.11.13 Jan 20 01:18:21.654373 waagent[2043]: 2026-01-20T01:18:21.654290Z INFO Daemon Daemon Run daemon Jan 20 01:18:21.657633 waagent[2043]: 2026-01-20T01:18:21.657601Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Jan 20 01:18:21.664155 waagent[2043]: 2026-01-20T01:18:21.664126Z INFO Daemon Daemon Using waagent for provisioning Jan 20 01:18:21.667985 waagent[2043]: 2026-01-20T01:18:21.667955Z INFO Daemon Daemon Activate resource disk Jan 20 01:18:21.671281 waagent[2043]: 2026-01-20T01:18:21.671252Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 20 01:18:21.679138 waagent[2043]: 2026-01-20T01:18:21.679107Z INFO Daemon Daemon Found device: None Jan 20 01:18:21.682334 waagent[2043]: 2026-01-20T01:18:21.682305Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 20 01:18:21.688197 waagent[2043]: 2026-01-20T01:18:21.688168Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 20 01:18:21.696243 waagent[2043]: 2026-01-20T01:18:21.696213Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 01:18:21.700226 waagent[2043]: 2026-01-20T01:18:21.700198Z INFO Daemon Daemon Running default provisioning handler Jan 20 01:18:21.708415 waagent[2043]: 2026-01-20T01:18:21.708381Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 20 01:18:21.718085 waagent[2043]: 2026-01-20T01:18:21.718054Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 20 01:18:21.724879 waagent[2043]: 2026-01-20T01:18:21.724852Z INFO Daemon Daemon cloud-init is enabled: False Jan 20 01:18:21.728475 waagent[2043]: 2026-01-20T01:18:21.728446Z INFO Daemon Daemon Copying ovf-env.xml Jan 20 01:18:21.801416 waagent[2043]: 2026-01-20T01:18:21.801383Z INFO Daemon Daemon Successfully mounted dvd Jan 20 01:18:21.841629 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 20 01:18:21.843522 waagent[2043]: 2026-01-20T01:18:21.843391Z INFO Daemon Daemon Detect protocol endpoint Jan 20 01:18:21.846933 waagent[2043]: 2026-01-20T01:18:21.846902Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 01:18:21.850920 waagent[2043]: 2026-01-20T01:18:21.850894Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 20 01:18:21.855466 waagent[2043]: 2026-01-20T01:18:21.855444Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 20 01:18:21.859177 waagent[2043]: 2026-01-20T01:18:21.859150Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 20 01:18:21.862656 waagent[2043]: 2026-01-20T01:18:21.862634Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 20 01:18:21.935017 waagent[2043]: 2026-01-20T01:18:21.934984Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 20 01:18:21.939634 waagent[2043]: 2026-01-20T01:18:21.939616Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 20 01:18:21.943383 waagent[2043]: 2026-01-20T01:18:21.943360Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 20 01:18:22.054766 waagent[2043]: 2026-01-20T01:18:22.054717Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 20 01:18:22.059519 waagent[2043]: 2026-01-20T01:18:22.059479Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 20 01:18:22.065966 waagent[2043]: 2026-01-20T01:18:22.065932Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 01:18:22.083480 waagent[2043]: 2026-01-20T01:18:22.083453Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 20 01:18:22.087628 waagent[2043]: 2026-01-20T01:18:22.087597Z INFO Daemon Jan 20 01:18:22.089726 waagent[2043]: 2026-01-20T01:18:22.089699Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 368a40fc-7485-4d4e-9b21-8e3808180673 eTag: 15106171810082559505 source: Fabric] Jan 20 01:18:22.097724 waagent[2043]: 2026-01-20T01:18:22.097697Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 20 01:18:22.102614 waagent[2043]: 2026-01-20T01:18:22.102588Z INFO Daemon Jan 20 01:18:22.104716 waagent[2043]: 2026-01-20T01:18:22.104692Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 20 01:18:22.112559 waagent[2043]: 2026-01-20T01:18:22.112533Z INFO Daemon Daemon Downloading artifacts profile blob Jan 20 01:18:22.165740 waagent[2043]: 2026-01-20T01:18:22.165695Z INFO Daemon Downloaded certificate {'thumbprint': '4AF552A4492492341E68C28716B2E2F11C78B645', 'hasPrivateKey': True} Jan 20 01:18:22.172475 waagent[2043]: 2026-01-20T01:18:22.172443Z INFO Daemon Fetch goal state completed Jan 20 01:18:22.206347 waagent[2043]: 2026-01-20T01:18:22.206280Z INFO Daemon Daemon Starting provisioning Jan 20 01:18:22.209879 waagent[2043]: 2026-01-20T01:18:22.209851Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 20 01:18:22.213304 waagent[2043]: 2026-01-20T01:18:22.213281Z INFO Daemon Daemon Set hostname [ci-4459.2.2-n-d40ac89f78] Jan 20 01:18:22.282064 waagent[2043]: 2026-01-20T01:18:22.282025Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-n-d40ac89f78] Jan 20 01:18:22.286482 waagent[2043]: 2026-01-20T01:18:22.286375Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 20 01:18:22.290779 waagent[2043]: 2026-01-20T01:18:22.290750Z INFO Daemon Daemon Primary interface is [eth0] Jan 20 01:18:22.299831 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:18:22.299836 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:18:22.299870 systemd-networkd[1493]: eth0: DHCP lease lost Jan 20 01:18:22.300774 waagent[2043]: 2026-01-20T01:18:22.300736Z INFO Daemon Daemon Create user account if not exists Jan 20 01:18:22.304594 waagent[2043]: 2026-01-20T01:18:22.304564Z INFO Daemon Daemon User core already exists, skip useradd Jan 20 01:18:22.308707 waagent[2043]: 2026-01-20T01:18:22.308683Z INFO Daemon Daemon Configure sudoer Jan 20 01:18:22.314895 waagent[2043]: 2026-01-20T01:18:22.314856Z INFO Daemon Daemon Configure sshd Jan 20 01:18:22.320225 waagent[2043]: 2026-01-20T01:18:22.320188Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 20 01:18:22.329069 waagent[2043]: 2026-01-20T01:18:22.329034Z INFO Daemon Daemon Deploy ssh public key. 
Jan 20 01:18:22.334547 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.20.20/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:18:23.475938 waagent[2043]: 2026-01-20T01:18:23.472593Z INFO Daemon Daemon Provisioning complete Jan 20 01:18:23.487640 waagent[2043]: 2026-01-20T01:18:23.487608Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 20 01:18:23.492123 waagent[2043]: 2026-01-20T01:18:23.492089Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 20 01:18:23.499376 waagent[2043]: 2026-01-20T01:18:23.499347Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 20 01:18:23.594979 waagent[2146]: 2026-01-20T01:18:23.594931Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 20 01:18:23.596267 waagent[2146]: 2026-01-20T01:18:23.595305Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Jan 20 01:18:23.596267 waagent[2146]: 2026-01-20T01:18:23.595360Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 20 01:18:23.596267 waagent[2146]: 2026-01-20T01:18:23.595398Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 20 01:18:23.669042 waagent[2146]: 2026-01-20T01:18:23.669003Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 20 01:18:23.669272 waagent[2146]: 2026-01-20T01:18:23.669243Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:18:23.669382 waagent[2146]: 2026-01-20T01:18:23.669357Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:18:23.674717 waagent[2146]: 2026-01-20T01:18:23.674677Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 01:18:23.678991 waagent[2146]: 2026-01-20T01:18:23.678962Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 20 
01:18:23.679385 waagent[2146]: 2026-01-20T01:18:23.679355Z INFO ExtHandler Jan 20 01:18:23.679535 waagent[2146]: 2026-01-20T01:18:23.679481Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e076f5e9-f135-4363-ab55-39663f1398bb eTag: 15106171810082559505 source: Fabric] Jan 20 01:18:23.679839 waagent[2146]: 2026-01-20T01:18:23.679810Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 20 01:18:23.680316 waagent[2146]: 2026-01-20T01:18:23.680284Z INFO ExtHandler Jan 20 01:18:23.680414 waagent[2146]: 2026-01-20T01:18:23.680394Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 20 01:18:23.683673 waagent[2146]: 2026-01-20T01:18:23.683648Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:18:23.774058 waagent[2146]: 2026-01-20T01:18:23.773973Z INFO ExtHandler Downloaded certificate {'thumbprint': '4AF552A4492492341E68C28716B2E2F11C78B645', 'hasPrivateKey': True} Jan 20 01:18:23.774944 waagent[2146]: 2026-01-20T01:18:23.774909Z INFO ExtHandler Fetch goal state completed Jan 20 01:18:23.786525 waagent[2146]: 2026-01-20T01:18:23.786251Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 20 01:18:23.789354 waagent[2146]: 2026-01-20T01:18:23.789311Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2146 Jan 20 01:18:23.789449 waagent[2146]: 2026-01-20T01:18:23.789423Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 20 01:18:23.789711 waagent[2146]: 2026-01-20T01:18:23.789682Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 20 01:18:23.790763 waagent[2146]: 2026-01-20T01:18:23.790731Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Jan 20 01:18:23.791064 waagent[2146]: 
2026-01-20T01:18:23.791036Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 20 01:18:23.791167 waagent[2146]: 2026-01-20T01:18:23.791146Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 20 01:18:23.791601 waagent[2146]: 2026-01-20T01:18:23.791571Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 20 01:18:23.800639 waagent[2146]: 2026-01-20T01:18:23.800613Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 20 01:18:23.800758 waagent[2146]: 2026-01-20T01:18:23.800731Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 20 01:18:23.804847 waagent[2146]: 2026-01-20T01:18:23.804823Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 20 01:18:23.809010 systemd[1]: Reload requested from client PID 2161 ('systemctl') (unit waagent.service)... Jan 20 01:18:23.809193 systemd[1]: Reloading... Jan 20 01:18:23.885523 zram_generator::config[2200]: No configuration found. Jan 20 01:18:24.027268 systemd[1]: Reloading finished in 217 ms. Jan 20 01:18:24.050147 waagent[2146]: 2026-01-20T01:18:24.050087Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 20 01:18:24.050225 waagent[2146]: 2026-01-20T01:18:24.050204Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 20 01:18:24.116096 waagent[2146]: 2026-01-20T01:18:24.115471Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 20 01:18:24.116096 waagent[2146]: 2026-01-20T01:18:24.115719Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 20 01:18:24.116294 waagent[2146]: 2026-01-20T01:18:24.116251Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 20 01:18:24.116379 waagent[2146]: 2026-01-20T01:18:24.116344Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:18:24.116430 waagent[2146]: 2026-01-20T01:18:24.116411Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:18:24.116615 waagent[2146]: 2026-01-20T01:18:24.116583Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 20 01:18:24.117021 waagent[2146]: 2026-01-20T01:18:24.116985Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 20 01:18:24.117141 waagent[2146]: 2026-01-20T01:18:24.117111Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 20 01:18:24.117141 waagent[2146]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 20 01:18:24.117141 waagent[2146]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 20 01:18:24.117141 waagent[2146]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 20 01:18:24.117141 waagent[2146]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:18:24.117141 waagent[2146]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:18:24.117141 waagent[2146]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:18:24.117487 waagent[2146]: 2026-01-20T01:18:24.117454Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 20 01:18:24.117597 waagent[2146]: 2026-01-20T01:18:24.117558Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 20 01:18:24.117700 waagent[2146]: 2026-01-20T01:18:24.117674Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:18:24.117743 waagent[2146]: 2026-01-20T01:18:24.117730Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:18:24.117840 waagent[2146]: 2026-01-20T01:18:24.117818Z INFO EnvHandler ExtHandler Configure routes Jan 20 01:18:24.117878 waagent[2146]: 2026-01-20T01:18:24.117862Z INFO EnvHandler ExtHandler Gateway:None Jan 20 01:18:24.117902 waagent[2146]: 2026-01-20T01:18:24.117890Z INFO EnvHandler ExtHandler Routes:None Jan 20 01:18:24.118510 waagent[2146]: 2026-01-20T01:18:24.118468Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 20 01:18:24.118662 waagent[2146]: 2026-01-20T01:18:24.118563Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 20 01:18:24.118662 waagent[2146]: 2026-01-20T01:18:24.118606Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 20 01:18:24.127304 waagent[2146]: 2026-01-20T01:18:24.127267Z INFO ExtHandler ExtHandler Jan 20 01:18:24.127353 waagent[2146]: 2026-01-20T01:18:24.127326Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 78609229-14c4-42ca-9797-76fb38adc474 correlation 8e8960be-7a5a-435c-8dba-b391a8a5a579 created: 2026-01-20T01:17:13.938660Z] Jan 20 01:18:24.127600 waagent[2146]: 2026-01-20T01:18:24.127570Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 20 01:18:24.127974 waagent[2146]: 2026-01-20T01:18:24.127950Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 20 01:18:24.141350 waagent[2146]: 2026-01-20T01:18:24.141307Z INFO MonitorHandler ExtHandler Network interfaces: Jan 20 01:18:24.141350 waagent[2146]: Executing ['ip', '-a', '-o', 'link']: Jan 20 01:18:24.141350 waagent[2146]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 20 01:18:24.141350 waagent[2146]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:ed:ef brd ff:ff:ff:ff:ff:ff Jan 20 01:18:24.141350 waagent[2146]: 3: enP9022s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:ed:ef brd ff:ff:ff:ff:ff:ff\ altname enP9022p0s2 Jan 20 01:18:24.141350 waagent[2146]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 20 01:18:24.141350 waagent[2146]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 20 01:18:24.141350 waagent[2146]: 2: eth0 inet 10.200.20.20/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 20 01:18:24.141350 waagent[2146]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 20 01:18:24.141350 waagent[2146]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 20 01:18:24.141350 waagent[2146]: 2: eth0 inet6 fe80::222:48ff:feb7:edef/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 20 01:18:24.161477 waagent[2146]: 2026-01-20T01:18:24.161433Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 20 01:18:24.161477 waagent[2146]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 20 01:18:24.161768 waagent[2146]: 2026-01-20T01:18:24.161737Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E2045676-A949-46BD-9139-BF953CBE4E5D;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 20 01:18:24.177112 waagent[2146]: 2026-01-20T01:18:24.176522Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 20 01:18:24.177112 waagent[2146]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:18:24.177112 waagent[2146]: pkts bytes target prot opt in out source destination Jan 20 01:18:24.177112 waagent[2146]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:18:24.177112 waagent[2146]: pkts bytes target prot opt in out source destination Jan 20 01:18:24.177112 waagent[2146]: Chain OUTPUT (policy ACCEPT 9 packets, 1180 bytes) Jan 20 01:18:24.177112 waagent[2146]: pkts bytes target prot opt in out source destination Jan 20 01:18:24.177112 waagent[2146]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 01:18:24.177112 waagent[2146]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 01:18:24.177112 waagent[2146]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 01:18:24.178746 waagent[2146]: 2026-01-20T01:18:24.178715Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 20 01:18:24.178746 waagent[2146]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:18:24.178746 waagent[2146]: pkts bytes target prot opt in out source destination Jan 20 01:18:24.178746 waagent[2146]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:18:24.178746 waagent[2146]: pkts bytes target prot opt in out source destination Jan 20 01:18:24.178746 waagent[2146]: Chain OUTPUT (policy ACCEPT 9 packets, 1180 bytes) Jan 20 01:18:24.178746 waagent[2146]: pkts bytes target prot opt in out source destination Jan 20 01:18:24.178746 waagent[2146]: 0 0 ACCEPT tcp -- 
* * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 01:18:24.178746 waagent[2146]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 01:18:24.178746 waagent[2146]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 01:18:24.179094 waagent[2146]: 2026-01-20T01:18:24.179071Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 20 01:18:29.935268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:18:29.937013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:18:30.059247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:18:30.065837 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:18:30.132930 kubelet[2295]: E0120 01:18:30.132878 2295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:18:30.135331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:18:30.135433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:18:30.135816 systemd[1]: kubelet.service: Consumed 106ms CPU time, 106.7M memory peak. Jan 20 01:18:30.314471 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:18:30.316253 systemd[1]: Started sshd@0-10.200.20.20:22-10.200.16.10:45404.service - OpenSSH per-connection server daemon (10.200.16.10:45404). 
Jan 20 01:18:31.145090 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 45404 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:18:31.145999 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:18:31.149457 systemd-logind[1897]: New session 3 of user core. Jan 20 01:18:31.156698 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:18:31.584693 systemd[1]: Started sshd@1-10.200.20.20:22-10.200.16.10:45408.service - OpenSSH per-connection server daemon (10.200.16.10:45408). Jan 20 01:18:32.077893 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 45408 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:18:32.078887 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:18:32.082180 systemd-logind[1897]: New session 4 of user core. Jan 20 01:18:32.088607 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:18:32.427256 sshd[2312]: Connection closed by 10.200.16.10 port 45408 Jan 20 01:18:32.427651 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Jan 20 01:18:32.430430 systemd[1]: sshd@1-10.200.20.20:22-10.200.16.10:45408.service: Deactivated successfully. Jan 20 01:18:32.431703 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:18:32.432221 systemd-logind[1897]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:18:32.433275 systemd-logind[1897]: Removed session 4. Jan 20 01:18:32.509696 systemd[1]: Started sshd@2-10.200.20.20:22-10.200.16.10:45418.service - OpenSSH per-connection server daemon (10.200.16.10:45418). 
Jan 20 01:18:32.967121 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 45418 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:18:32.968035 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:18:32.970991 systemd-logind[1897]: New session 5 of user core. Jan 20 01:18:32.973598 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:18:33.294753 sshd[2321]: Connection closed by 10.200.16.10 port 45418 Jan 20 01:18:33.295247 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Jan 20 01:18:33.298595 systemd[1]: sshd@2-10.200.20.20:22-10.200.16.10:45418.service: Deactivated successfully. Jan 20 01:18:33.298816 systemd-logind[1897]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:18:33.300544 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:18:33.302140 systemd-logind[1897]: Removed session 5. Jan 20 01:18:33.383692 systemd[1]: Started sshd@3-10.200.20.20:22-10.200.16.10:45426.service - OpenSSH per-connection server daemon (10.200.16.10:45426). Jan 20 01:18:33.870222 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 45426 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:18:33.872465 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:18:33.876155 systemd-logind[1897]: New session 6 of user core. Jan 20 01:18:33.881614 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:18:34.220880 sshd[2330]: Connection closed by 10.200.16.10 port 45426 Jan 20 01:18:34.221274 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Jan 20 01:18:34.224368 systemd-logind[1897]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:18:34.225070 systemd[1]: sshd@3-10.200.20.20:22-10.200.16.10:45426.service: Deactivated successfully. 
Jan 20 01:18:34.226808 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 01:18:34.228253 systemd-logind[1897]: Removed session 6.
Jan 20 01:18:34.306682 systemd[1]: Started sshd@4-10.200.20.20:22-10.200.16.10:45430.service - OpenSSH per-connection server daemon (10.200.16.10:45430).
Jan 20 01:18:34.756319 sshd[2336]: Accepted publickey for core from 10.200.16.10 port 45430 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:18:34.756950 sshd-session[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:18:34.760430 systemd-logind[1897]: New session 7 of user core.
Jan 20 01:18:34.766587 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 01:18:35.189153 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 01:18:35.189364 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:18:35.232969 sudo[2340]: pam_unix(sudo:session): session closed for user root
Jan 20 01:18:35.311069 sshd[2339]: Connection closed by 10.200.16.10 port 45430
Jan 20 01:18:35.310355 sshd-session[2336]: pam_unix(sshd:session): session closed for user core
Jan 20 01:18:35.313742 systemd-logind[1897]: Session 7 logged out. Waiting for processes to exit.
Jan 20 01:18:35.314279 systemd[1]: sshd@4-10.200.20.20:22-10.200.16.10:45430.service: Deactivated successfully.
Jan 20 01:18:35.315582 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 01:18:35.316877 systemd-logind[1897]: Removed session 7.
Jan 20 01:18:35.389610 systemd[1]: Started sshd@5-10.200.20.20:22-10.200.16.10:45436.service - OpenSSH per-connection server daemon (10.200.16.10:45436).
Jan 20 01:18:35.839888 sshd[2346]: Accepted publickey for core from 10.200.16.10 port 45436 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:18:35.840871 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:18:35.844385 systemd-logind[1897]: New session 8 of user core.
Jan 20 01:18:35.851618 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 01:18:36.094468 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 01:18:36.094704 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:18:36.099789 sudo[2351]: pam_unix(sudo:session): session closed for user root
Jan 20 01:18:36.103034 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 20 01:18:36.103217 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:18:36.110548 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:18:36.137695 augenrules[2373]: No rules
Jan 20 01:18:36.138699 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:18:36.138974 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:18:36.139923 sudo[2350]: pam_unix(sudo:session): session closed for user root
Jan 20 01:18:36.216484 sshd[2349]: Connection closed by 10.200.16.10 port 45436
Jan 20 01:18:36.217585 sshd-session[2346]: pam_unix(sshd:session): session closed for user core
Jan 20 01:18:36.219416 systemd[1]: sshd@5-10.200.20.20:22-10.200.16.10:45436.service: Deactivated successfully.
Jan 20 01:18:36.220607 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 01:18:36.222130 systemd-logind[1897]: Session 8 logged out. Waiting for processes to exit.
Jan 20 01:18:36.223127 systemd-logind[1897]: Removed session 8.
Jan 20 01:18:36.304356 systemd[1]: Started sshd@6-10.200.20.20:22-10.200.16.10:45442.service - OpenSSH per-connection server daemon (10.200.16.10:45442).
Jan 20 01:18:36.795423 sshd[2382]: Accepted publickey for core from 10.200.16.10 port 45442 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:18:36.796408 sshd-session[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:18:36.799663 systemd-logind[1897]: New session 9 of user core.
Jan 20 01:18:36.806633 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 01:18:37.070899 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 01:18:37.071096 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:18:38.325689 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 01:18:38.340878 (dockerd)[2404]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 01:18:38.567910 dockerd[2404]: time="2026-01-20T01:18:38.567859752Z" level=info msg="Starting up"
Jan 20 01:18:38.568712 dockerd[2404]: time="2026-01-20T01:18:38.568689480Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 01:18:38.577347 dockerd[2404]: time="2026-01-20T01:18:38.577145808Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 01:18:38.671863 systemd[1]: var-lib-docker-metacopy\x2dcheck3919714055-merged.mount: Deactivated successfully.
Jan 20 01:18:38.686659 dockerd[2404]: time="2026-01-20T01:18:38.686609112Z" level=info msg="Loading containers: start."
Jan 20 01:18:38.700527 kernel: Initializing XFRM netlink socket
Jan 20 01:18:38.867013 systemd-networkd[1493]: docker0: Link UP
Jan 20 01:18:38.888195 dockerd[2404]: time="2026-01-20T01:18:38.888168216Z" level=info msg="Loading containers: done."
Jan 20 01:18:38.904846 dockerd[2404]: time="2026-01-20T01:18:38.904607032Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 01:18:38.904846 dockerd[2404]: time="2026-01-20T01:18:38.904657312Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 01:18:38.904846 dockerd[2404]: time="2026-01-20T01:18:38.904724264Z" level=info msg="Initializing buildkit"
Jan 20 01:18:38.940207 dockerd[2404]: time="2026-01-20T01:18:38.940182568Z" level=info msg="Completed buildkit initialization"
Jan 20 01:18:38.945228 dockerd[2404]: time="2026-01-20T01:18:38.945074984Z" level=info msg="Daemon has completed initialization"
Jan 20 01:18:38.945374 dockerd[2404]: time="2026-01-20T01:18:38.945325680Z" level=info msg="API listen on /run/docker.sock"
Jan 20 01:18:38.945517 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 01:18:39.599904 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2453318412-merged.mount: Deactivated successfully.
Jan 20 01:18:39.660375 containerd[1915]: time="2026-01-20T01:18:39.660259856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 20 01:18:40.185122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 01:18:40.186512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:40.290368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:40.293048 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:18:40.398231 kubelet[2620]: E0120 01:18:40.398185 2620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:18:40.400075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:18:40.400173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:18:40.401570 systemd[1]: kubelet.service: Consumed 104ms CPU time, 106.8M memory peak.
Jan 20 01:18:40.873318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661204768.mount: Deactivated successfully.
Jan 20 01:18:42.059365 containerd[1915]: time="2026-01-20T01:18:42.058773672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:42.060443 containerd[1915]: time="2026-01-20T01:18:42.060421304Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040"
Jan 20 01:18:42.062929 containerd[1915]: time="2026-01-20T01:18:42.062910888Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:42.066080 containerd[1915]: time="2026-01-20T01:18:42.066048568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:42.066695 containerd[1915]: time="2026-01-20T01:18:42.066670904Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.40630944s"
Jan 20 01:18:42.066780 containerd[1915]: time="2026-01-20T01:18:42.066769752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Jan 20 01:18:42.067318 containerd[1915]: time="2026-01-20T01:18:42.067289984Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 20 01:18:42.512586 chronyd[1875]: Selected source PHC0
Jan 20 01:18:43.360198 containerd[1915]: time="2026-01-20T01:18:43.360149419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:43.362231 containerd[1915]: time="2026-01-20T01:18:43.362076570Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477"
Jan 20 01:18:43.364480 containerd[1915]: time="2026-01-20T01:18:43.364458479Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:43.367799 containerd[1915]: time="2026-01-20T01:18:43.367774924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:43.369850 containerd[1915]: time="2026-01-20T01:18:43.369259588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.301941386s"
Jan 20 01:18:43.369850 containerd[1915]: time="2026-01-20T01:18:43.369293571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Jan 20 01:18:43.370437 containerd[1915]: time="2026-01-20T01:18:43.370405325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 20 01:18:44.581055 containerd[1915]: time="2026-01-20T01:18:44.581003422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:44.584064 containerd[1915]: time="2026-01-20T01:18:44.583985910Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716"
Jan 20 01:18:44.586812 containerd[1915]: time="2026-01-20T01:18:44.586777198Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:44.593459 containerd[1915]: time="2026-01-20T01:18:44.593426030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:44.594099 containerd[1915]: time="2026-01-20T01:18:44.593901614Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.223135077s"
Jan 20 01:18:44.594099 containerd[1915]: time="2026-01-20T01:18:44.593929734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Jan 20 01:18:44.594455 containerd[1915]: time="2026-01-20T01:18:44.594407662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 20 01:18:45.850153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758655686.mount: Deactivated successfully.
Jan 20 01:18:46.355331 containerd[1915]: time="2026-01-20T01:18:46.354931990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:46.356736 containerd[1915]: time="2026-01-20T01:18:46.356714766Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253"
Jan 20 01:18:46.359150 containerd[1915]: time="2026-01-20T01:18:46.359131846Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:46.361471 containerd[1915]: time="2026-01-20T01:18:46.361452070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:46.361720 containerd[1915]: time="2026-01-20T01:18:46.361693246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.767146952s"
Jan 20 01:18:46.361720 containerd[1915]: time="2026-01-20T01:18:46.361721494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Jan 20 01:18:46.362523 containerd[1915]: time="2026-01-20T01:18:46.362425798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 20 01:18:46.922462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724307556.mount: Deactivated successfully.
Jan 20 01:18:48.157341 containerd[1915]: time="2026-01-20T01:18:48.157290366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:48.159124 containerd[1915]: time="2026-01-20T01:18:48.158960526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Jan 20 01:18:48.161413 containerd[1915]: time="2026-01-20T01:18:48.161390462Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:48.164853 containerd[1915]: time="2026-01-20T01:18:48.164828150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:48.165390 containerd[1915]: time="2026-01-20T01:18:48.165366670Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.802812912s"
Jan 20 01:18:48.165475 containerd[1915]: time="2026-01-20T01:18:48.165463222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Jan 20 01:18:48.166332 containerd[1915]: time="2026-01-20T01:18:48.166299526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 20 01:18:48.686583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687254836.mount: Deactivated successfully.
Jan 20 01:18:48.700484 containerd[1915]: time="2026-01-20T01:18:48.700443966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:48.702454 containerd[1915]: time="2026-01-20T01:18:48.702428670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Jan 20 01:18:48.704213 containerd[1915]: time="2026-01-20T01:18:48.704190574Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:48.707186 containerd[1915]: time="2026-01-20T01:18:48.707163030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:48.707935 containerd[1915]: time="2026-01-20T01:18:48.707912670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 541.576448ms"
Jan 20 01:18:48.707956 containerd[1915]: time="2026-01-20T01:18:48.707939406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Jan 20 01:18:48.708416 containerd[1915]: time="2026-01-20T01:18:48.708395150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 20 01:18:49.314365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312043396.mount: Deactivated successfully.
Jan 20 01:18:50.435271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 01:18:50.437651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:50.540635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:50.547696 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:18:50.641891 kubelet[2814]: E0120 01:18:50.641833 2814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:18:50.644042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:18:50.644235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:18:50.644589 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.9M memory peak.
Jan 20 01:18:53.753927 containerd[1915]: time="2026-01-20T01:18:53.753878410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:53.755993 containerd[1915]: time="2026-01-20T01:18:53.755967117Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987"
Jan 20 01:18:53.971178 containerd[1915]: time="2026-01-20T01:18:53.971127742Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:54.627198 containerd[1915]: time="2026-01-20T01:18:54.626641403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:18:54.627590 containerd[1915]: time="2026-01-20T01:18:54.627568381Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 5.919148327s"
Jan 20 01:18:54.627644 containerd[1915]: time="2026-01-20T01:18:54.627592630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Jan 20 01:18:58.017360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:58.017821 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.9M memory peak.
Jan 20 01:18:58.019487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:58.039749 systemd[1]: Reload requested from client PID 2853 ('systemctl') (unit session-9.scope)...
Jan 20 01:18:58.039842 systemd[1]: Reloading...
Jan 20 01:18:58.126542 zram_generator::config[2906]: No configuration found.
Jan 20 01:18:58.271486 systemd[1]: Reloading finished in 231 ms.
Jan 20 01:18:58.328022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 20 01:18:58.328082 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 20 01:18:58.328276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:58.328311 systemd[1]: kubelet.service: Consumed 64ms CPU time, 94M memory peak.
Jan 20 01:18:58.330078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:18:58.434908 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 20 01:18:58.548552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:18:58.554692 (kubelet)[2964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 01:18:58.693325 kubelet[2964]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 01:18:58.693325 kubelet[2964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 01:18:58.693913 kubelet[2964]: I0120 01:18:58.693871 2964 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 01:18:59.382728 kubelet[2964]: I0120 01:18:59.382686 2964 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 01:18:59.382728 kubelet[2964]: I0120 01:18:59.382717 2964 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 01:18:59.383862 kubelet[2964]: I0120 01:18:59.383841 2964 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 01:18:59.383862 kubelet[2964]: I0120 01:18:59.383861 2964 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 01:18:59.384058 kubelet[2964]: I0120 01:18:59.384043 2964 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 01:18:59.781094 kubelet[2964]: E0120 01:18:59.781049 2964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 01:18:59.782118 kubelet[2964]: I0120 01:18:59.782094 2964 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 01:18:59.785740 kubelet[2964]: I0120 01:18:59.785727 2964 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 01:18:59.788414 kubelet[2964]: I0120 01:18:59.788376 2964 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 01:18:59.788599 kubelet[2964]: I0120 01:18:59.788567 2964 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 01:18:59.788703 kubelet[2964]: I0120 01:18:59.788594 2964 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-d40ac89f78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 01:18:59.788773 kubelet[2964]: I0120 01:18:59.788704 2964 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 01:18:59.788773 kubelet[2964]: I0120 01:18:59.788716 2964 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 01:18:59.788806 kubelet[2964]: I0120 01:18:59.788801 2964 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 01:18:59.792027 kubelet[2964]: I0120 01:18:59.792010 2964 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:18:59.793030 kubelet[2964]: I0120 01:18:59.793014 2964 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 01:18:59.793064 kubelet[2964]: I0120 01:18:59.793032 2964 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 01:18:59.793064 kubelet[2964]: I0120 01:18:59.793055 2964 kubelet.go:387] "Adding apiserver pod source"
Jan 20 01:18:59.793064 kubelet[2964]: I0120 01:18:59.793066 2964 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 01:18:59.794524 kubelet[2964]: E0120 01:18:59.793747 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 01:18:59.794524 kubelet[2964]: I0120 01:18:59.793844 2964 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 01:18:59.794524 kubelet[2964]: I0120 01:18:59.794161 2964 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 01:18:59.794524 kubelet[2964]: I0120 01:18:59.794178 2964 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 20 01:18:59.794524 kubelet[2964]: W0120 01:18:59.794207 2964 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 01:18:59.798711 kubelet[2964]: I0120 01:18:59.796834 2964 server.go:1262] "Started kubelet"
Jan 20 01:18:59.798711 kubelet[2964]: E0120 01:18:59.796849 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-d40ac89f78&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 01:18:59.798711 kubelet[2964]: I0120 01:18:59.797042 2964 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 01:18:59.798711 kubelet[2964]: I0120 01:18:59.797360 2964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 01:18:59.798711 kubelet[2964]: I0120 01:18:59.797593 2964 server.go:310] "Adding debug handlers to kubelet server"
Jan 20 01:18:59.799814 kubelet[2964]: I0120 01:18:59.799776 2964 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 01:18:59.799907 kubelet[2964]: I0120 01:18:59.799896 2964 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 20 01:18:59.800087 kubelet[2964]: I0120 01:18:59.800075 2964 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 01:18:59.800564 kubelet[2964]: I0120 01:18:59.800547 2964 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 01:18:59.802665 kubelet[2964]: I0120 01:18:59.802646 2964 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 20 01:18:59.802817 kubelet[2964]: E0120 01:18:59.802780 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:18:59.804633 kubelet[2964]: I0120 01:18:59.804605 2964 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 01:18:59.804700 kubelet[2964]: I0120 01:18:59.804648 2964 reconciler.go:29] "Reconciler: start to sync state"
Jan 20 01:18:59.805965 kubelet[2964]: E0120 01:18:59.804492 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-d40ac89f78.188c4ba848bec6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-d40ac89f78,UID:ci-4459.2.2-n-d40ac89f78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-d40ac89f78,},FirstTimestamp:2026-01-20 01:18:59.796813539 +0000 UTC m=+1.239846277,LastTimestamp:2026-01-20 01:18:59.796813539 +0000 UTC m=+1.239846277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-d40ac89f78,}"
Jan 20 01:18:59.805965 kubelet[2964]: E0120 01:18:59.805930 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-d40ac89f78?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="200ms"
Jan 20 01:18:59.806360 kubelet[2964]: I0120 01:18:59.806340 2964 factory.go:223] Registration of the systemd container factory successfully
Jan 20 01:18:59.806515 kubelet[2964]: I0120 01:18:59.806483 2964 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 01:18:59.808210 kubelet[2964]: E0120 01:18:59.807979 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 01:18:59.808376 kubelet[2964]: I0120 01:18:59.808361 2964 factory.go:223] Registration of the containerd container factory successfully
Jan 20 01:18:59.814681 kubelet[2964]: E0120 01:18:59.814663 2964 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 01:18:59.826668 kubelet[2964]: I0120 01:18:59.826643 2964 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 01:18:59.826668 kubelet[2964]: I0120 01:18:59.826658 2964 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 01:18:59.826797 kubelet[2964]: I0120 01:18:59.826684 2964 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:18:59.830580 kubelet[2964]: I0120 01:18:59.830558 2964 policy_none.go:49] "None policy: Start"
Jan 20 01:18:59.830580 kubelet[2964]: I0120 01:18:59.830584 2964 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 20 01:18:59.830670 kubelet[2964]: I0120 01:18:59.830593 2964 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 20 01:18:59.833671 kubelet[2964]: I0120 01:18:59.833653 2964 policy_none.go:47] "Start"
Jan 20 01:18:59.836966 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 20 01:18:59.842658 kubelet[2964]: I0120 01:18:59.842492 2964 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 20 01:18:59.844565 kubelet[2964]: I0120 01:18:59.844546 2964 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 01:18:59.844634 kubelet[2964]: I0120 01:18:59.844626 2964 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:18:59.844704 kubelet[2964]: I0120 01:18:59.844697 2964 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:18:59.844781 kubelet[2964]: E0120 01:18:59.844765 2964 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:18:59.845605 kubelet[2964]: E0120 01:18:59.845244 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:18:59.846717 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:18:59.850141 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:18:59.857118 kubelet[2964]: E0120 01:18:59.857098 2964 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:18:59.857258 kubelet[2964]: I0120 01:18:59.857241 2964 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:18:59.857305 kubelet[2964]: I0120 01:18:59.857256 2964 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:18:59.857787 kubelet[2964]: I0120 01:18:59.857707 2964 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:18:59.859075 kubelet[2964]: E0120 01:18:59.859055 2964 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:18:59.859141 kubelet[2964]: E0120 01:18:59.859085 2964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-n-d40ac89f78\" not found" Jan 20 01:18:59.955845 systemd[1]: Created slice kubepods-burstable-pod0d1e42be9980a603291b738a96e83e40.slice - libcontainer container kubepods-burstable-pod0d1e42be9980a603291b738a96e83e40.slice. Jan 20 01:18:59.958846 kubelet[2964]: I0120 01:18:59.958824 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:18:59.959428 kubelet[2964]: E0120 01:18:59.959167 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:18:59.963087 kubelet[2964]: E0120 01:18:59.963059 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:18:59.967022 systemd[1]: Created slice kubepods-burstable-pod0fbb151bd3ab55109be13d2d89b4f007.slice - libcontainer container kubepods-burstable-pod0fbb151bd3ab55109be13d2d89b4f007.slice. Jan 20 01:18:59.978574 kubelet[2964]: E0120 01:18:59.978550 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:18:59.980895 systemd[1]: Created slice kubepods-burstable-pod3be27594cc0d510083bd5bb254028acf.slice - libcontainer container kubepods-burstable-pod3be27594cc0d510083bd5bb254028acf.slice. 
Jan 20 01:18:59.982251 kubelet[2964]: E0120 01:18:59.982231 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006741 kubelet[2964]: I0120 01:19:00.006528 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3be27594cc0d510083bd5bb254028acf-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-d40ac89f78\" (UID: \"3be27594cc0d510083bd5bb254028acf\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006741 kubelet[2964]: I0120 01:19:00.006552 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d1e42be9980a603291b738a96e83e40-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" (UID: \"0d1e42be9980a603291b738a96e83e40\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006741 kubelet[2964]: I0120 01:19:00.006564 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d1e42be9980a603291b738a96e83e40-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" (UID: \"0d1e42be9980a603291b738a96e83e40\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006741 kubelet[2964]: I0120 01:19:00.006575 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006741 kubelet[2964]: I0120 
01:19:00.006585 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006875 kubelet[2964]: I0120 01:19:00.006594 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d1e42be9980a603291b738a96e83e40-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" (UID: \"0d1e42be9980a603291b738a96e83e40\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006875 kubelet[2964]: I0120 01:19:00.006602 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006875 kubelet[2964]: I0120 01:19:00.006611 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006875 kubelet[2964]: I0120 01:19:00.006620 2964 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" 
(UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.006875 kubelet[2964]: E0120 01:19:00.006712 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-d40ac89f78?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="400ms" Jan 20 01:19:00.161103 kubelet[2964]: I0120 01:19:00.161008 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.162081 kubelet[2964]: E0120 01:19:00.162052 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.268032 containerd[1915]: time="2026-01-20T01:19:00.267987136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-d40ac89f78,Uid:0d1e42be9980a603291b738a96e83e40,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:00.282505 containerd[1915]: time="2026-01-20T01:19:00.282475439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-d40ac89f78,Uid:0fbb151bd3ab55109be13d2d89b4f007,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:00.286258 containerd[1915]: time="2026-01-20T01:19:00.286201674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-d40ac89f78,Uid:3be27594cc0d510083bd5bb254028acf,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:00.407889 kubelet[2964]: E0120 01:19:00.407847 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-d40ac89f78?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="800ms" Jan 20 
01:19:00.563846 kubelet[2964]: I0120 01:19:00.563814 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.564130 kubelet[2964]: E0120 01:19:00.564104 2964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.20:6443/api/v1/nodes\": dial tcp 10.200.20.20:6443: connect: connection refused" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:00.733442 kubelet[2964]: E0120 01:19:00.733403 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:19:00.906160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321069932.mount: Deactivated successfully. Jan 20 01:19:00.924372 containerd[1915]: time="2026-01-20T01:19:00.924333908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:00.938469 containerd[1915]: time="2026-01-20T01:19:00.938441839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 20 01:19:00.943119 containerd[1915]: time="2026-01-20T01:19:00.943072469Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:00.948780 containerd[1915]: time="2026-01-20T01:19:00.948752316Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:00.952526 
containerd[1915]: time="2026-01-20T01:19:00.952485119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:00.954965 containerd[1915]: time="2026-01-20T01:19:00.954942923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:19:00.956709 containerd[1915]: time="2026-01-20T01:19:00.956692985Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:19:00.965397 containerd[1915]: time="2026-01-20T01:19:00.965371276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:19:00.966829 containerd[1915]: time="2026-01-20T01:19:00.966292089Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 680.415473ms" Jan 20 01:19:00.966913 containerd[1915]: time="2026-01-20T01:19:00.966890011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 695.612045ms" Jan 20 01:19:00.986323 containerd[1915]: time="2026-01-20T01:19:00.986285480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 694.892167ms" Jan 20 01:19:01.011195 kubelet[2964]: E0120 01:19:01.011105 2964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.20:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-n-d40ac89f78.188c4ba848bec6e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-n-d40ac89f78,UID:ci-4459.2.2-n-d40ac89f78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-n-d40ac89f78,},FirstTimestamp:2026-01-20 01:18:59.796813539 +0000 UTC m=+1.239846277,LastTimestamp:2026-01-20 01:18:59.796813539 +0000 UTC m=+1.239846277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-n-d40ac89f78,}" Jan 20 01:19:01.024654 containerd[1915]: time="2026-01-20T01:19:01.024614725Z" level=info msg="connecting to shim 8e83ef73550ae430d70e0191e41c3fb90d9d3753f59509f753a3153d15177776" address="unix:///run/containerd/s/f3b98b86435743ad4d14e133ce395175fc8b4c1feb9a9ffbc5c4d6b7a3bb587f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:01.036104 containerd[1915]: time="2026-01-20T01:19:01.036032660Z" level=info msg="connecting to shim bbb79ff8f7226245d0b4d5127f4a2449742fafe080ee29351d6a5cee51c4b5a8" address="unix:///run/containerd/s/fed15a96afcdd41e27950a7d8c7f35d19d5dc9c2fd8cefa8983117139d9fe417" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:01.050521 containerd[1915]: time="2026-01-20T01:19:01.050198369Z" level=info msg="connecting to shim 0c1e2cca0ddb4b71a0b02c169ad02d85db0fe406cf30df869064b2fa14ded3c9" 
address="unix:///run/containerd/s/b0cb19be9a9e0d37c3eea6af3a0a63b5c333fad31bfa9fb037f99e2d03a67c7d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:01.051743 systemd[1]: Started cri-containerd-8e83ef73550ae430d70e0191e41c3fb90d9d3753f59509f753a3153d15177776.scope - libcontainer container 8e83ef73550ae430d70e0191e41c3fb90d9d3753f59509f753a3153d15177776. Jan 20 01:19:01.062756 systemd[1]: Started cri-containerd-bbb79ff8f7226245d0b4d5127f4a2449742fafe080ee29351d6a5cee51c4b5a8.scope - libcontainer container bbb79ff8f7226245d0b4d5127f4a2449742fafe080ee29351d6a5cee51c4b5a8. Jan 20 01:19:01.077573 systemd[1]: Started cri-containerd-0c1e2cca0ddb4b71a0b02c169ad02d85db0fe406cf30df869064b2fa14ded3c9.scope - libcontainer container 0c1e2cca0ddb4b71a0b02c169ad02d85db0fe406cf30df869064b2fa14ded3c9. Jan 20 01:19:01.100514 containerd[1915]: time="2026-01-20T01:19:01.100085585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-n-d40ac89f78,Uid:0d1e42be9980a603291b738a96e83e40,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e83ef73550ae430d70e0191e41c3fb90d9d3753f59509f753a3153d15177776\"" Jan 20 01:19:01.111058 containerd[1915]: time="2026-01-20T01:19:01.111022290Z" level=info msg="CreateContainer within sandbox \"8e83ef73550ae430d70e0191e41c3fb90d9d3753f59509f753a3153d15177776\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:19:01.120246 containerd[1915]: time="2026-01-20T01:19:01.120214301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-n-d40ac89f78,Uid:0fbb151bd3ab55109be13d2d89b4f007,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbb79ff8f7226245d0b4d5127f4a2449742fafe080ee29351d6a5cee51c4b5a8\"" Jan 20 01:19:01.127377 containerd[1915]: time="2026-01-20T01:19:01.127344360Z" level=info msg="Container e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:01.127891 
containerd[1915]: time="2026-01-20T01:19:01.127867696Z" level=info msg="CreateContainer within sandbox \"bbb79ff8f7226245d0b4d5127f4a2449742fafe080ee29351d6a5cee51c4b5a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:19:01.129805 containerd[1915]: time="2026-01-20T01:19:01.129738546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-n-d40ac89f78,Uid:3be27594cc0d510083bd5bb254028acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c1e2cca0ddb4b71a0b02c169ad02d85db0fe406cf30df869064b2fa14ded3c9\"" Jan 20 01:19:01.135774 containerd[1915]: time="2026-01-20T01:19:01.135673393Z" level=info msg="CreateContainer within sandbox \"0c1e2cca0ddb4b71a0b02c169ad02d85db0fe406cf30df869064b2fa14ded3c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:19:01.150976 containerd[1915]: time="2026-01-20T01:19:01.150948511Z" level=info msg="Container 8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:01.161952 containerd[1915]: time="2026-01-20T01:19:01.161863191Z" level=info msg="Container e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:01.162736 containerd[1915]: time="2026-01-20T01:19:01.162048389Z" level=info msg="CreateContainer within sandbox \"8e83ef73550ae430d70e0191e41c3fb90d9d3753f59509f753a3153d15177776\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345\"" Jan 20 01:19:01.163424 containerd[1915]: time="2026-01-20T01:19:01.163392238Z" level=info msg="StartContainer for \"e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345\"" Jan 20 01:19:01.166018 containerd[1915]: time="2026-01-20T01:19:01.165774496Z" level=info msg="connecting to shim e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345" 
address="unix:///run/containerd/s/f3b98b86435743ad4d14e133ce395175fc8b4c1feb9a9ffbc5c4d6b7a3bb587f" protocol=ttrpc version=3 Jan 20 01:19:01.167598 kubelet[2964]: E0120 01:19:01.167566 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-n-d40ac89f78&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:19:01.177059 containerd[1915]: time="2026-01-20T01:19:01.176991569Z" level=info msg="CreateContainer within sandbox \"bbb79ff8f7226245d0b4d5127f4a2449742fafe080ee29351d6a5cee51c4b5a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491\"" Jan 20 01:19:01.177633 containerd[1915]: time="2026-01-20T01:19:01.177613820Z" level=info msg="StartContainer for \"8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491\"" Jan 20 01:19:01.178696 containerd[1915]: time="2026-01-20T01:19:01.178677101Z" level=info msg="connecting to shim 8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491" address="unix:///run/containerd/s/fed15a96afcdd41e27950a7d8c7f35d19d5dc9c2fd8cefa8983117139d9fe417" protocol=ttrpc version=3 Jan 20 01:19:01.181696 systemd[1]: Started cri-containerd-e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345.scope - libcontainer container e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345. 
Jan 20 01:19:01.186727 kubelet[2964]: E0120 01:19:01.186611 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:19:01.188257 containerd[1915]: time="2026-01-20T01:19:01.188227547Z" level=info msg="CreateContainer within sandbox \"0c1e2cca0ddb4b71a0b02c169ad02d85db0fe406cf30df869064b2fa14ded3c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001\"" Jan 20 01:19:01.191870 containerd[1915]: time="2026-01-20T01:19:01.191841634Z" level=info msg="StartContainer for \"e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001\"" Jan 20 01:19:01.193480 containerd[1915]: time="2026-01-20T01:19:01.193457980Z" level=info msg="connecting to shim e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001" address="unix:///run/containerd/s/b0cb19be9a9e0d37c3eea6af3a0a63b5c333fad31bfa9fb037f99e2d03a67c7d" protocol=ttrpc version=3 Jan 20 01:19:01.205922 systemd[1]: Started cri-containerd-8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491.scope - libcontainer container 8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491. 
Jan 20 01:19:01.209503 kubelet[2964]: E0120 01:19:01.209469 2964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-n-d40ac89f78?timeout=10s\": dial tcp 10.200.20.20:6443: connect: connection refused" interval="1.6s" Jan 20 01:19:01.218964 systemd[1]: Started cri-containerd-e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001.scope - libcontainer container e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001. Jan 20 01:19:01.233646 containerd[1915]: time="2026-01-20T01:19:01.233615649Z" level=info msg="StartContainer for \"e7594eafcf4fd5d2362164ea6200d89668378bd2c5a32b1fea5fe2e8bc655345\" returns successfully" Jan 20 01:19:01.262531 kubelet[2964]: E0120 01:19:01.262478 2964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:19:01.274452 containerd[1915]: time="2026-01-20T01:19:01.274392289Z" level=info msg="StartContainer for \"8e2ca0d2e457930287dadd0aab55b53ab1c5d370359b849a70b0c303d6a7e491\" returns successfully" Jan 20 01:19:01.282040 containerd[1915]: time="2026-01-20T01:19:01.281862943Z" level=info msg="StartContainer for \"e56107710a6ca5e4b5f1a258aa94481035d02309e50f99a79f82215195bd8001\" returns successfully" Jan 20 01:19:01.366870 kubelet[2964]: I0120 01:19:01.366836 2964 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:01.857093 kubelet[2964]: E0120 01:19:01.856564 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:01.860738 
kubelet[2964]: E0120 01:19:01.860673 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:01.862304 kubelet[2964]: E0120 01:19:01.862237 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:02.620306 kubelet[2964]: I0120 01:19:02.620141 2964 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:02.620306 kubelet[2964]: E0120 01:19:02.620176 2964 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-n-d40ac89f78\": node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:02.771861 kubelet[2964]: E0120 01:19:02.771058 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:02.866486 kubelet[2964]: E0120 01:19:02.865908 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:02.866932 kubelet[2964]: E0120 01:19:02.866812 2964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-n-d40ac89f78\" not found" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:02.873191 kubelet[2964]: E0120 01:19:02.872981 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:02.973811 kubelet[2964]: E0120 01:19:02.973774 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.074873 kubelet[2964]: E0120 01:19:03.074843 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.175577 kubelet[2964]: E0120 01:19:03.175332 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.275904 kubelet[2964]: E0120 01:19:03.275868 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.377350 kubelet[2964]: E0120 01:19:03.376754 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.477043 kubelet[2964]: E0120 01:19:03.477007 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.577796 kubelet[2964]: E0120 01:19:03.577760 2964 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:03.704024 kubelet[2964]: I0120 01:19:03.703987 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:03.722340 kubelet[2964]: I0120 01:19:03.722312 2964 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:03.722476 kubelet[2964]: I0120 01:19:03.722431 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:03.767980 kubelet[2964]: I0120 01:19:03.767698 2964 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:03.767980 kubelet[2964]: I0120 01:19:03.767786 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:03.794762 kubelet[2964]: I0120 01:19:03.794749 2964 apiserver.go:52] "Watching apiserver"
Jan 20 01:19:03.805223 kubelet[2964]: I0120 01:19:03.805186 2964 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 20 01:19:03.811583 kubelet[2964]: I0120 01:19:03.811493 2964 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:03.865210 kubelet[2964]: I0120 01:19:03.864993 2964 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:03.875649 kubelet[2964]: I0120 01:19:03.875630 2964 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:03.875810 kubelet[2964]: E0120 01:19:03.875771 2964 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:03.889085 update_engine[1902]: I20260120 01:19:03.888643 1902 update_attempter.cc:509] Updating boot flags...
Jan 20 01:19:05.110281 systemd[1]: Reload requested from client PID 3359 ('systemctl') (unit session-9.scope)...
Jan 20 01:19:05.110295 systemd[1]: Reloading...
Jan 20 01:19:05.188527 zram_generator::config[3406]: No configuration found.
Jan 20 01:19:05.352903 systemd[1]: Reloading finished in 242 ms.
Jan 20 01:19:05.376057 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:19:05.390288 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 01:19:05.390487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:19:05.390550 systemd[1]: kubelet.service: Consumed 961ms CPU time, 121.7M memory peak.
Jan 20 01:19:05.393683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:19:05.540170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:19:05.548724 (kubelet)[3470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 01:19:05.587693 kubelet[3470]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 01:19:05.588563 kubelet[3470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 01:19:05.588563 kubelet[3470]: I0120 01:19:05.587955 3470 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 01:19:05.594548 kubelet[3470]: I0120 01:19:05.594521 3470 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 01:19:05.594657 kubelet[3470]: I0120 01:19:05.594648 3470 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 01:19:05.594720 kubelet[3470]: I0120 01:19:05.594714 3470 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 01:19:05.594763 kubelet[3470]: I0120 01:19:05.594755 3470 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 01:19:05.594976 kubelet[3470]: I0120 01:19:05.594960 3470 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 01:19:05.595905 kubelet[3470]: I0120 01:19:05.595883 3470 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 20 01:19:05.597678 kubelet[3470]: I0120 01:19:05.597656 3470 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 01:19:05.604188 kubelet[3470]: I0120 01:19:05.604169 3470 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 01:19:05.608331 kubelet[3470]: I0120 01:19:05.608310 3470 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 01:19:05.608627 kubelet[3470]: I0120 01:19:05.608597 3470 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 01:19:05.608726 kubelet[3470]: I0120 01:19:05.608624 3470 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-n-d40ac89f78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 01:19:05.608786 kubelet[3470]: I0120 01:19:05.608727 3470 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 01:19:05.608786 kubelet[3470]: I0120 01:19:05.608733 3470 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 01:19:05.608786 kubelet[3470]: I0120 01:19:05.608752 3470 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 01:19:05.609363 kubelet[3470]: I0120 01:19:05.609347 3470 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:19:05.609471 kubelet[3470]: I0120 01:19:05.609456 3470 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 01:19:05.609571 kubelet[3470]: I0120 01:19:05.609472 3470 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 01:19:05.609571 kubelet[3470]: I0120 01:19:05.609492 3470 kubelet.go:387] "Adding apiserver pod source"
Jan 20 01:19:05.609571 kubelet[3470]: I0120 01:19:05.609525 3470 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 01:19:05.611820 kubelet[3470]: I0120 01:19:05.610823 3470 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 01:19:05.611820 kubelet[3470]: I0120 01:19:05.611178 3470 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 01:19:05.611820 kubelet[3470]: I0120 01:19:05.611197 3470 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 20 01:19:05.613175 kubelet[3470]: I0120 01:19:05.613160 3470 server.go:1262] "Started kubelet"
Jan 20 01:19:05.614096 kubelet[3470]: I0120 01:19:05.614080 3470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 01:19:05.624678 kubelet[3470]: I0120 01:19:05.624637 3470 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 01:19:05.625098 kubelet[3470]: I0120 01:19:05.625067 3470 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 01:19:05.625154 kubelet[3470]: I0120 01:19:05.625107 3470 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 20 01:19:05.625303 kubelet[3470]: I0120 01:19:05.625287 3470 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 01:19:05.627158 kubelet[3470]: I0120 01:19:05.626229 3470 server.go:310] "Adding debug handlers to kubelet server"
Jan 20 01:19:05.627661 kubelet[3470]: I0120 01:19:05.627641 3470 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 20 01:19:05.628465 kubelet[3470]: E0120 01:19:05.627893 3470 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-n-d40ac89f78\" not found"
Jan 20 01:19:05.628593 kubelet[3470]: I0120 01:19:05.628185 3470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 01:19:05.634898 kubelet[3470]: I0120 01:19:05.634869 3470 factory.go:223] Registration of the systemd container factory successfully
Jan 20 01:19:05.634978 kubelet[3470]: I0120 01:19:05.634959 3470 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 01:19:05.636312 kubelet[3470]: I0120 01:19:05.635881 3470 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 01:19:05.636312 kubelet[3470]: I0120 01:19:05.636003 3470 reconciler.go:29] "Reconciler: start to sync state"
Jan 20 01:19:05.642821 kubelet[3470]: I0120 01:19:05.642792 3470 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 20 01:19:05.644610 kubelet[3470]: I0120 01:19:05.644582 3470 factory.go:223] Registration of the containerd container factory successfully
Jan 20 01:19:05.658877 kubelet[3470]: I0120 01:19:05.658610 3470 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 20 01:19:05.658877 kubelet[3470]: I0120 01:19:05.658629 3470 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 20 01:19:05.658877 kubelet[3470]: I0120 01:19:05.658648 3470 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 20 01:19:05.658877 kubelet[3470]: E0120 01:19:05.658683 3470 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 01:19:05.690685 kubelet[3470]: I0120 01:19:05.690664 3470 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 01:19:05.690828 kubelet[3470]: I0120 01:19:05.690814 3470 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 01:19:05.690869 kubelet[3470]: I0120 01:19:05.690864 3470 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:19:05.691024 kubelet[3470]: I0120 01:19:05.691009 3470 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 20 01:19:05.691109 kubelet[3470]: I0120 01:19:05.691091 3470 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 20 01:19:05.691154 kubelet[3470]: I0120 01:19:05.691148 3470 policy_none.go:49] "None policy: Start"
Jan 20 01:19:05.691190 kubelet[3470]: I0120 01:19:05.691184 3470 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 20 01:19:05.691241 kubelet[3470]: I0120 01:19:05.691231 3470 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 20 01:19:05.691375 kubelet[3470]: I0120 01:19:05.691360 3470 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 20 01:19:05.691430 kubelet[3470]: I0120 01:19:05.691424 3470 policy_none.go:47] "Start"
Jan 20 01:19:05.696450 kubelet[3470]: E0120 01:19:05.696407 3470 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 01:19:05.697547 kubelet[3470]: I0120 01:19:05.697138 3470 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 01:19:05.697547 kubelet[3470]: I0120 01:19:05.697154 3470 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 01:19:05.697547 kubelet[3470]: I0120 01:19:05.697458 3470 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 01:19:05.698676 kubelet[3470]: E0120 01:19:05.698614 3470 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 01:19:05.759766 kubelet[3470]: I0120 01:19:05.759734 3470 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.760049 kubelet[3470]: I0120 01:19:05.759954 3470 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.760423 kubelet[3470]: I0120 01:19:05.760404 3470 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.774552 kubelet[3470]: I0120 01:19:05.774511 3470 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:05.774619 kubelet[3470]: E0120 01:19:05.774566 3470 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-d40ac89f78\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.775445 kubelet[3470]: I0120 01:19:05.775426 3470 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:05.775514 kubelet[3470]: E0120 01:19:05.775458 3470 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.775821 kubelet[3470]: I0120 01:19:05.775718 3470 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:05.775821 kubelet[3470]: E0120 01:19:05.775751 3470 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.800444 kubelet[3470]: I0120 01:19:05.800421 3470 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.812165 kubelet[3470]: I0120 01:19:05.811937 3470 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.812165 kubelet[3470]: I0120 01:19:05.812004 3470 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.837661 kubelet[3470]: I0120 01:19:05.837639 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d1e42be9980a603291b738a96e83e40-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" (UID: \"0d1e42be9980a603291b738a96e83e40\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.837795 kubelet[3470]: I0120 01:19:05.837783 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d1e42be9980a603291b738a96e83e40-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" (UID: \"0d1e42be9980a603291b738a96e83e40\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.837923 kubelet[3470]: I0120 01:19:05.837871 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d1e42be9980a603291b738a96e83e40-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" (UID: \"0d1e42be9980a603291b738a96e83e40\") " pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.837923 kubelet[3470]: I0120 01:19:05.837885 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.837923 kubelet[3470]: I0120 01:19:05.837898 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.838026 kubelet[3470]: I0120 01:19:05.837908 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.838142 kubelet[3470]: I0120 01:19:05.838099 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.838142 kubelet[3470]: I0120 01:19:05.838114 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3be27594cc0d510083bd5bb254028acf-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-n-d40ac89f78\" (UID: \"3be27594cc0d510083bd5bb254028acf\") " pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:05.838142 kubelet[3470]: I0120 01:19:05.838124 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fbb151bd3ab55109be13d2d89b4f007-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-n-d40ac89f78\" (UID: \"0fbb151bd3ab55109be13d2d89b4f007\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:06.612332 kubelet[3470]: I0120 01:19:06.611075 3470 apiserver.go:52] "Watching apiserver"
Jan 20 01:19:06.636642 kubelet[3470]: I0120 01:19:06.636622 3470 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 20 01:19:06.680062 kubelet[3470]: I0120 01:19:06.680025 3470 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:06.680330 kubelet[3470]: I0120 01:19:06.680312 3470 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:06.725675 kubelet[3470]: I0120 01:19:06.725632 3470 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:06.726144 kubelet[3470]: E0120 01:19:06.725822 3470 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-n-d40ac89f78\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:06.726144 kubelet[3470]: I0120 01:19:06.726014 3470 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 20 01:19:06.726144 kubelet[3470]: E0120 01:19:06.726046 3470 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-n-d40ac89f78\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78"
Jan 20 01:19:06.744515 kubelet[3470]: I0120 01:19:06.744390 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-n-d40ac89f78" podStartSLOduration=3.744378219 podStartE2EDuration="3.744378219s" podCreationTimestamp="2026-01-20 01:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:06.735189268 +0000 UTC m=+1.183502941" watchObservedRunningTime="2026-01-20 01:19:06.744378219 +0000 UTC m=+1.192691788"
Jan 20 01:19:06.755325 kubelet[3470]: I0120 01:19:06.755280 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-n-d40ac89f78" podStartSLOduration=3.755270896 podStartE2EDuration="3.755270896s" podCreationTimestamp="2026-01-20 01:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:06.744875776 +0000 UTC m=+1.193189345" watchObservedRunningTime="2026-01-20 01:19:06.755270896 +0000 UTC m=+1.203584465"
Jan 20 01:19:06.755579 kubelet[3470]: I0120 01:19:06.755555 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-n-d40ac89f78" podStartSLOduration=3.7555456400000002 podStartE2EDuration="3.75554564s" podCreationTimestamp="2026-01-20 01:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:06.755493119 +0000 UTC m=+1.203806688" watchObservedRunningTime="2026-01-20 01:19:06.75554564 +0000 UTC m=+1.203859209"
Jan 20 01:19:09.626290 kubelet[3470]: I0120 01:19:09.626257 3470 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 20 01:19:09.626757 containerd[1915]: time="2026-01-20T01:19:09.626627814Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 01:19:09.626892 kubelet[3470]: I0120 01:19:09.626777 3470 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 20 01:19:10.786944 systemd[1]: Created slice kubepods-besteffort-pode0b990b9_0109_4ee1_9d18_cd1830b0e199.slice - libcontainer container kubepods-besteffort-pode0b990b9_0109_4ee1_9d18_cd1830b0e199.slice.
Jan 20 01:19:10.865519 kubelet[3470]: I0120 01:19:10.865311 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0b990b9-0109-4ee1-9d18-cd1830b0e199-xtables-lock\") pod \"kube-proxy-hgv5v\" (UID: \"e0b990b9-0109-4ee1-9d18-cd1830b0e199\") " pod="kube-system/kube-proxy-hgv5v"
Jan 20 01:19:10.865519 kubelet[3470]: I0120 01:19:10.865339 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0b990b9-0109-4ee1-9d18-cd1830b0e199-lib-modules\") pod \"kube-proxy-hgv5v\" (UID: \"e0b990b9-0109-4ee1-9d18-cd1830b0e199\") " pod="kube-system/kube-proxy-hgv5v"
Jan 20 01:19:10.865519 kubelet[3470]: I0120 01:19:10.865355 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65sg\" (UniqueName: \"kubernetes.io/projected/e0b990b9-0109-4ee1-9d18-cd1830b0e199-kube-api-access-v65sg\") pod \"kube-proxy-hgv5v\" (UID: \"e0b990b9-0109-4ee1-9d18-cd1830b0e199\") " pod="kube-system/kube-proxy-hgv5v"
Jan 20 01:19:10.865519 kubelet[3470]: I0120 01:19:10.865369 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0b990b9-0109-4ee1-9d18-cd1830b0e199-kube-proxy\") pod \"kube-proxy-hgv5v\" (UID: \"e0b990b9-0109-4ee1-9d18-cd1830b0e199\") " pod="kube-system/kube-proxy-hgv5v"
Jan 20 01:19:10.917437 systemd[1]: Created slice kubepods-besteffort-pod9d845303_7038_4ce3_8d25_89b8e679a95e.slice - libcontainer container kubepods-besteffort-pod9d845303_7038_4ce3_8d25_89b8e679a95e.slice.
Jan 20 01:19:10.966271 kubelet[3470]: I0120 01:19:10.966238 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w7w7\" (UniqueName: \"kubernetes.io/projected/9d845303-7038-4ce3-8d25-89b8e679a95e-kube-api-access-5w7w7\") pod \"tigera-operator-65cdcdfd6d-wmg2b\" (UID: \"9d845303-7038-4ce3-8d25-89b8e679a95e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wmg2b"
Jan 20 01:19:10.966271 kubelet[3470]: I0120 01:19:10.966277 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d845303-7038-4ce3-8d25-89b8e679a95e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-wmg2b\" (UID: \"9d845303-7038-4ce3-8d25-89b8e679a95e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wmg2b"
Jan 20 01:19:11.101529 containerd[1915]: time="2026-01-20T01:19:11.101421525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hgv5v,Uid:e0b990b9-0109-4ee1-9d18-cd1830b0e199,Namespace:kube-system,Attempt:0,}"
Jan 20 01:19:11.135033 containerd[1915]: time="2026-01-20T01:19:11.134995615Z" level=info msg="connecting to shim f7a8fb95b7e05aa6bedf8cef49251aaf6f610337c5d011639c7a1126244c1774" address="unix:///run/containerd/s/66a192421516392c120ee4843b09d7c5bad331439995936c348666b5c215e72e" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:19:11.157639 systemd[1]: Started cri-containerd-f7a8fb95b7e05aa6bedf8cef49251aaf6f610337c5d011639c7a1126244c1774.scope - libcontainer container f7a8fb95b7e05aa6bedf8cef49251aaf6f610337c5d011639c7a1126244c1774.
Jan 20 01:19:11.176805 containerd[1915]: time="2026-01-20T01:19:11.176766068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hgv5v,Uid:e0b990b9-0109-4ee1-9d18-cd1830b0e199,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7a8fb95b7e05aa6bedf8cef49251aaf6f610337c5d011639c7a1126244c1774\""
Jan 20 01:19:11.184418 containerd[1915]: time="2026-01-20T01:19:11.184365670Z" level=info msg="CreateContainer within sandbox \"f7a8fb95b7e05aa6bedf8cef49251aaf6f610337c5d011639c7a1126244c1774\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 20 01:19:11.201126 containerd[1915]: time="2026-01-20T01:19:11.200685946Z" level=info msg="Container 47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:19:11.219027 containerd[1915]: time="2026-01-20T01:19:11.218989165Z" level=info msg="CreateContainer within sandbox \"f7a8fb95b7e05aa6bedf8cef49251aaf6f610337c5d011639c7a1126244c1774\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1\""
Jan 20 01:19:11.219696 containerd[1915]: time="2026-01-20T01:19:11.219673072Z" level=info msg="StartContainer for \"47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1\""
Jan 20 01:19:11.220943 containerd[1915]: time="2026-01-20T01:19:11.220820000Z" level=info msg="connecting to shim 47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1" address="unix:///run/containerd/s/66a192421516392c120ee4843b09d7c5bad331439995936c348666b5c215e72e" protocol=ttrpc version=3
Jan 20 01:19:11.224490 containerd[1915]: time="2026-01-20T01:19:11.224389483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wmg2b,Uid:9d845303-7038-4ce3-8d25-89b8e679a95e,Namespace:tigera-operator,Attempt:0,}"
Jan 20 01:19:11.241652 systemd[1]: Started cri-containerd-47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1.scope - libcontainer container 47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1.
Jan 20 01:19:11.252909 containerd[1915]: time="2026-01-20T01:19:11.252874056Z" level=info msg="connecting to shim 961a2982eb6ae9a56d0f5f79608287937fa0ab22d934689b318bbe3686b452c4" address="unix:///run/containerd/s/b84cb360f3adf253613f61dccb209ccfba95e3b2c06eed5d6bb44834e823a2ae" namespace=k8s.io protocol=ttrpc version=3
Jan 20 01:19:11.273704 systemd[1]: Started cri-containerd-961a2982eb6ae9a56d0f5f79608287937fa0ab22d934689b318bbe3686b452c4.scope - libcontainer container 961a2982eb6ae9a56d0f5f79608287937fa0ab22d934689b318bbe3686b452c4.
Jan 20 01:19:11.290337 containerd[1915]: time="2026-01-20T01:19:11.290132048Z" level=info msg="StartContainer for \"47122023d11064a55dc7124fdfbc651bb92e0573a83b578bf8cf2f991f07fcb1\" returns successfully"
Jan 20 01:19:11.309689 containerd[1915]: time="2026-01-20T01:19:11.309626676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wmg2b,Uid:9d845303-7038-4ce3-8d25-89b8e679a95e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"961a2982eb6ae9a56d0f5f79608287937fa0ab22d934689b318bbe3686b452c4\""
Jan 20 01:19:11.312536 containerd[1915]: time="2026-01-20T01:19:11.312099169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 20 01:19:11.700668 kubelet[3470]: I0120 01:19:11.700511 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hgv5v" podStartSLOduration=1.700487104 podStartE2EDuration="1.700487104s" podCreationTimestamp="2026-01-20 01:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:11.700120366 +0000 UTC m=+6.148433935" watchObservedRunningTime="2026-01-20 01:19:11.700487104 +0000 UTC m=+6.148800673"
Jan 20 01:19:11.980208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295897207.mount: Deactivated successfully.
Jan 20 01:19:13.348739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241971068.mount: Deactivated successfully.
Jan 20 01:19:13.793390 containerd[1915]: time="2026-01-20T01:19:13.793340863Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:19:13.795524 containerd[1915]: time="2026-01-20T01:19:13.795402001Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 20 01:19:13.797752 containerd[1915]: time="2026-01-20T01:19:13.797712577Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:19:13.801578 containerd[1915]: time="2026-01-20T01:19:13.801541771Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:19:13.802023 containerd[1915]: time="2026-01-20T01:19:13.801827795Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.489701193s"
Jan 20 01:19:13.802023 containerd[1915]: time="2026-01-20T01:19:13.801854891Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 20 01:19:13.807978 containerd[1915]: time="2026-01-20T01:19:13.807824401Z" level=info msg="CreateContainer within sandbox \"961a2982eb6ae9a56d0f5f79608287937fa0ab22d934689b318bbe3686b452c4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 20 01:19:13.824240 containerd[1915]: time="2026-01-20T01:19:13.823915390Z" level=info msg="Container 342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace: CDI devices from CRI Config.CDIDevices: []"
Jan 20 01:19:13.834645 containerd[1915]: time="2026-01-20T01:19:13.834570278Z" level=info msg="CreateContainer within sandbox \"961a2982eb6ae9a56d0f5f79608287937fa0ab22d934689b318bbe3686b452c4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace\""
Jan 20 01:19:13.836515 containerd[1915]: time="2026-01-20T01:19:13.835783087Z" level=info msg="StartContainer for \"342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace\""
Jan 20 01:19:13.837199 containerd[1915]: time="2026-01-20T01:19:13.837172966Z" level=info msg="connecting to shim 342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace" address="unix:///run/containerd/s/b84cb360f3adf253613f61dccb209ccfba95e3b2c06eed5d6bb44834e823a2ae" protocol=ttrpc version=3
Jan 20 01:19:13.853601 systemd[1]: Started cri-containerd-342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace.scope - libcontainer container 342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace.
Jan 20 01:19:13.875178 containerd[1915]: time="2026-01-20T01:19:13.875159426Z" level=info msg="StartContainer for \"342d5f75ae6f36bb6f09ddbadc2492afeb6a64c58460965fa837258bca935ace\" returns successfully" Jan 20 01:19:15.625425 kubelet[3470]: I0120 01:19:15.625326 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-wmg2b" podStartSLOduration=3.134372927 podStartE2EDuration="5.625309787s" podCreationTimestamp="2026-01-20 01:19:10 +0000 UTC" firstStartedPulling="2026-01-20 01:19:11.311691085 +0000 UTC m=+5.760004654" lastFinishedPulling="2026-01-20 01:19:13.802627945 +0000 UTC m=+8.250941514" observedRunningTime="2026-01-20 01:19:14.715805022 +0000 UTC m=+9.164118639" watchObservedRunningTime="2026-01-20 01:19:15.625309787 +0000 UTC m=+10.073623356" Jan 20 01:19:18.954181 sudo[2386]: pam_unix(sudo:session): session closed for user root Jan 20 01:19:19.031402 sshd[2385]: Connection closed by 10.200.16.10 port 45442 Jan 20 01:19:19.034775 sshd-session[2382]: pam_unix(sshd:session): session closed for user core Jan 20 01:19:19.037431 systemd[1]: sshd@6-10.200.20.20:22-10.200.16.10:45442.service: Deactivated successfully. Jan 20 01:19:19.040443 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:19:19.043723 systemd[1]: session-9.scope: Consumed 4.136s CPU time, 222.8M memory peak. Jan 20 01:19:19.046248 systemd-logind[1897]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:19:19.049151 systemd-logind[1897]: Removed session 9. Jan 20 01:19:26.274183 systemd[1]: Created slice kubepods-besteffort-pod03fdd6e2_8d0f_45c7_b704_3b988958f451.slice - libcontainer container kubepods-besteffort-pod03fdd6e2_8d0f_45c7_b704_3b988958f451.slice. 
Jan 20 01:19:26.362288 kubelet[3470]: I0120 01:19:26.362258 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/03fdd6e2-8d0f-45c7-b704-3b988958f451-typha-certs\") pod \"calico-typha-55cdc485b-kxrzs\" (UID: \"03fdd6e2-8d0f-45c7-b704-3b988958f451\") " pod="calico-system/calico-typha-55cdc485b-kxrzs" Jan 20 01:19:26.362288 kubelet[3470]: I0120 01:19:26.362286 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drpzd\" (UniqueName: \"kubernetes.io/projected/03fdd6e2-8d0f-45c7-b704-3b988958f451-kube-api-access-drpzd\") pod \"calico-typha-55cdc485b-kxrzs\" (UID: \"03fdd6e2-8d0f-45c7-b704-3b988958f451\") " pod="calico-system/calico-typha-55cdc485b-kxrzs" Jan 20 01:19:26.362571 kubelet[3470]: I0120 01:19:26.362309 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03fdd6e2-8d0f-45c7-b704-3b988958f451-tigera-ca-bundle\") pod \"calico-typha-55cdc485b-kxrzs\" (UID: \"03fdd6e2-8d0f-45c7-b704-3b988958f451\") " pod="calico-system/calico-typha-55cdc485b-kxrzs" Jan 20 01:19:26.446595 systemd[1]: Created slice kubepods-besteffort-pod06c6ac75_0ed4_41fd_b9b1_53cd94359145.slice - libcontainer container kubepods-besteffort-pod06c6ac75_0ed4_41fd_b9b1_53cd94359145.slice. 
Jan 20 01:19:26.563272 kubelet[3470]: I0120 01:19:26.562898 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-policysync\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563272 kubelet[3470]: I0120 01:19:26.562934 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-cni-log-dir\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563272 kubelet[3470]: I0120 01:19:26.562944 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-var-lib-calico\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563272 kubelet[3470]: I0120 01:19:26.562955 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/06c6ac75-0ed4-41fd-b9b1-53cd94359145-node-certs\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563272 kubelet[3470]: I0120 01:19:26.562965 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-cni-bin-dir\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563428 kubelet[3470]: I0120 01:19:26.562975 3470 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spncd\" (UniqueName: \"kubernetes.io/projected/06c6ac75-0ed4-41fd-b9b1-53cd94359145-kube-api-access-spncd\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563428 kubelet[3470]: I0120 01:19:26.562989 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-var-run-calico\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563428 kubelet[3470]: I0120 01:19:26.562997 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-flexvol-driver-host\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563428 kubelet[3470]: I0120 01:19:26.563006 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-lib-modules\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563428 kubelet[3470]: I0120 01:19:26.563014 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-cni-net-dir\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563527 kubelet[3470]: I0120 01:19:26.563024 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06c6ac75-0ed4-41fd-b9b1-53cd94359145-xtables-lock\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.563527 kubelet[3470]: I0120 01:19:26.563035 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06c6ac75-0ed4-41fd-b9b1-53cd94359145-tigera-ca-bundle\") pod \"calico-node-rkjqk\" (UID: \"06c6ac75-0ed4-41fd-b9b1-53cd94359145\") " pod="calico-system/calico-node-rkjqk" Jan 20 01:19:26.582461 containerd[1915]: time="2026-01-20T01:19:26.582193259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cdc485b-kxrzs,Uid:03fdd6e2-8d0f-45c7-b704-3b988958f451,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:26.616363 containerd[1915]: time="2026-01-20T01:19:26.616313635Z" level=info msg="connecting to shim ade628f9a51b04482e60d558a311d1c44228d10f29703f73f0726c5a7e03ff94" address="unix:///run/containerd/s/442308b4ca16d4b4297021b7790a1d7e995032c0e2b477e89ba4f3d1a836063b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:26.643666 systemd[1]: Started cri-containerd-ade628f9a51b04482e60d558a311d1c44228d10f29703f73f0726c5a7e03ff94.scope - libcontainer container ade628f9a51b04482e60d558a311d1c44228d10f29703f73f0726c5a7e03ff94. 
Jan 20 01:19:26.664968 kubelet[3470]: E0120 01:19:26.664819 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:26.667733 kubelet[3470]: E0120 01:19:26.667711 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.668972 kubelet[3470]: W0120 01:19:26.668959 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.669052 kubelet[3470]: E0120 01:19:26.669043 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.674252 kubelet[3470]: E0120 01:19:26.674164 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.674252 kubelet[3470]: W0120 01:19:26.674181 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.674252 kubelet[3470]: E0120 01:19:26.674192 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.676455 kubelet[3470]: E0120 01:19:26.676434 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.676455 kubelet[3470]: W0120 01:19:26.676451 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.676546 kubelet[3470]: E0120 01:19:26.676462 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.701242 containerd[1915]: time="2026-01-20T01:19:26.701191735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cdc485b-kxrzs,Uid:03fdd6e2-8d0f-45c7-b704-3b988958f451,Namespace:calico-system,Attempt:0,} returns sandbox id \"ade628f9a51b04482e60d558a311d1c44228d10f29703f73f0726c5a7e03ff94\"" Jan 20 01:19:26.702691 containerd[1915]: time="2026-01-20T01:19:26.702675280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 01:19:26.738631 kubelet[3470]: E0120 01:19:26.738618 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.738774 kubelet[3470]: W0120 01:19:26.738677 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.738774 kubelet[3470]: E0120 01:19:26.738690 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.738969 kubelet[3470]: E0120 01:19:26.738958 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.739098 kubelet[3470]: W0120 01:19:26.738988 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.739098 kubelet[3470]: E0120 01:19:26.739022 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.739319 kubelet[3470]: E0120 01:19:26.739263 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.739319 kubelet[3470]: W0120 01:19:26.739273 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.739319 kubelet[3470]: E0120 01:19:26.739282 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.739590 kubelet[3470]: E0120 01:19:26.739544 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.739590 kubelet[3470]: W0120 01:19:26.739555 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.739590 kubelet[3470]: E0120 01:19:26.739564 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.739918 kubelet[3470]: E0120 01:19:26.739866 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.739918 kubelet[3470]: W0120 01:19:26.739876 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.739918 kubelet[3470]: E0120 01:19:26.739886 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.740237 kubelet[3470]: E0120 01:19:26.740202 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.740237 kubelet[3470]: W0120 01:19:26.740213 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.740237 kubelet[3470]: E0120 01:19:26.740223 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.740628 kubelet[3470]: E0120 01:19:26.740574 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.740628 kubelet[3470]: W0120 01:19:26.740586 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.740628 kubelet[3470]: E0120 01:19:26.740596 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.740889 kubelet[3470]: E0120 01:19:26.740842 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.740889 kubelet[3470]: W0120 01:19:26.740852 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.740889 kubelet[3470]: E0120 01:19:26.740861 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.741194 kubelet[3470]: E0120 01:19:26.741139 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.741194 kubelet[3470]: W0120 01:19:26.741152 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.741194 kubelet[3470]: E0120 01:19:26.741162 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.741450 kubelet[3470]: E0120 01:19:26.741404 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.741450 kubelet[3470]: W0120 01:19:26.741416 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.741450 kubelet[3470]: E0120 01:19:26.741425 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.741699 kubelet[3470]: E0120 01:19:26.741688 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.741808 kubelet[3470]: W0120 01:19:26.741746 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.741808 kubelet[3470]: E0120 01:19:26.741758 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.742036 kubelet[3470]: E0120 01:19:26.741992 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.742036 kubelet[3470]: W0120 01:19:26.742003 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.742036 kubelet[3470]: E0120 01:19:26.742011 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.742307 kubelet[3470]: E0120 01:19:26.742228 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.742307 kubelet[3470]: W0120 01:19:26.742238 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.742307 kubelet[3470]: E0120 01:19:26.742247 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.742478 kubelet[3470]: E0120 01:19:26.742433 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.742478 kubelet[3470]: W0120 01:19:26.742442 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.742478 kubelet[3470]: E0120 01:19:26.742451 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.742748 kubelet[3470]: E0120 01:19:26.742696 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.742748 kubelet[3470]: W0120 01:19:26.742707 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.742748 kubelet[3470]: E0120 01:19:26.742715 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.743026 kubelet[3470]: E0120 01:19:26.742969 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.743026 kubelet[3470]: W0120 01:19:26.742979 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.743026 kubelet[3470]: E0120 01:19:26.742988 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.743273 kubelet[3470]: E0120 01:19:26.743223 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.743273 kubelet[3470]: W0120 01:19:26.743234 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.743273 kubelet[3470]: E0120 01:19:26.743243 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.743511 kubelet[3470]: E0120 01:19:26.743470 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.743511 kubelet[3470]: W0120 01:19:26.743480 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.743511 kubelet[3470]: E0120 01:19:26.743489 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.743642 kubelet[3470]: E0120 01:19:26.743626 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.743642 kubelet[3470]: W0120 01:19:26.743637 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.743685 kubelet[3470]: E0120 01:19:26.743643 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.743788 kubelet[3470]: E0120 01:19:26.743773 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.743788 kubelet[3470]: W0120 01:19:26.743783 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.743788 kubelet[3470]: E0120 01:19:26.743790 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.755843 containerd[1915]: time="2026-01-20T01:19:26.755820311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rkjqk,Uid:06c6ac75-0ed4-41fd-b9b1-53cd94359145,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:26.765289 kubelet[3470]: E0120 01:19:26.765270 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.765289 kubelet[3470]: W0120 01:19:26.765286 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.765382 kubelet[3470]: E0120 01:19:26.765296 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.765382 kubelet[3470]: I0120 01:19:26.765313 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8fa1e625-99ce-4678-80e8-ad10255fcf43-varrun\") pod \"csi-node-driver-2cs6z\" (UID: \"8fa1e625-99ce-4678-80e8-ad10255fcf43\") " pod="calico-system/csi-node-driver-2cs6z" Jan 20 01:19:26.765509 kubelet[3470]: E0120 01:19:26.765484 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.765509 kubelet[3470]: W0120 01:19:26.765506 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.765702 kubelet[3470]: E0120 01:19:26.765515 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.765702 kubelet[3470]: I0120 01:19:26.765531 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6c6\" (UniqueName: \"kubernetes.io/projected/8fa1e625-99ce-4678-80e8-ad10255fcf43-kube-api-access-zc6c6\") pod \"csi-node-driver-2cs6z\" (UID: \"8fa1e625-99ce-4678-80e8-ad10255fcf43\") " pod="calico-system/csi-node-driver-2cs6z" Jan 20 01:19:26.765792 kubelet[3470]: E0120 01:19:26.765780 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.765857 kubelet[3470]: W0120 01:19:26.765846 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.765912 kubelet[3470]: E0120 01:19:26.765901 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.766094 kubelet[3470]: E0120 01:19:26.766083 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.766274 kubelet[3470]: W0120 01:19:26.766154 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.766274 kubelet[3470]: E0120 01:19:26.766170 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.766577 kubelet[3470]: E0120 01:19:26.766564 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.766923 kubelet[3470]: W0120 01:19:26.766852 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.766923 kubelet[3470]: E0120 01:19:26.766875 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.766923 kubelet[3470]: I0120 01:19:26.766899 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8fa1e625-99ce-4678-80e8-ad10255fcf43-socket-dir\") pod \"csi-node-driver-2cs6z\" (UID: \"8fa1e625-99ce-4678-80e8-ad10255fcf43\") " pod="calico-system/csi-node-driver-2cs6z" Jan 20 01:19:26.767045 kubelet[3470]: E0120 01:19:26.767021 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.767045 kubelet[3470]: W0120 01:19:26.767035 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.767099 kubelet[3470]: E0120 01:19:26.767045 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.767164 kubelet[3470]: E0120 01:19:26.767151 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.767164 kubelet[3470]: W0120 01:19:26.767160 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.767205 kubelet[3470]: E0120 01:19:26.767167 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.767285 kubelet[3470]: E0120 01:19:26.767272 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.767285 kubelet[3470]: W0120 01:19:26.767281 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.767328 kubelet[3470]: E0120 01:19:26.767287 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.767328 kubelet[3470]: I0120 01:19:26.767302 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fa1e625-99ce-4678-80e8-ad10255fcf43-kubelet-dir\") pod \"csi-node-driver-2cs6z\" (UID: \"8fa1e625-99ce-4678-80e8-ad10255fcf43\") " pod="calico-system/csi-node-driver-2cs6z" Jan 20 01:19:26.767413 kubelet[3470]: E0120 01:19:26.767398 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.767413 kubelet[3470]: W0120 01:19:26.767408 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.767579 kubelet[3470]: E0120 01:19:26.767414 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.767579 kubelet[3470]: I0120 01:19:26.767426 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8fa1e625-99ce-4678-80e8-ad10255fcf43-registration-dir\") pod \"csi-node-driver-2cs6z\" (UID: \"8fa1e625-99ce-4678-80e8-ad10255fcf43\") " pod="calico-system/csi-node-driver-2cs6z" Jan 20 01:19:26.767838 kubelet[3470]: E0120 01:19:26.767727 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.767838 kubelet[3470]: W0120 01:19:26.767741 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.767838 kubelet[3470]: E0120 01:19:26.767760 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.768045 kubelet[3470]: E0120 01:19:26.768034 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.768228 kubelet[3470]: W0120 01:19:26.768098 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.768228 kubelet[3470]: E0120 01:19:26.768113 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.768353 kubelet[3470]: E0120 01:19:26.768341 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.768406 kubelet[3470]: W0120 01:19:26.768396 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.768454 kubelet[3470]: E0120 01:19:26.768443 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.768744 kubelet[3470]: E0120 01:19:26.768646 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.768744 kubelet[3470]: W0120 01:19:26.768656 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.768744 kubelet[3470]: E0120 01:19:26.768665 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.768891 kubelet[3470]: E0120 01:19:26.768881 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.768936 kubelet[3470]: W0120 01:19:26.768928 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.768991 kubelet[3470]: E0120 01:19:26.768980 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.769187 kubelet[3470]: E0120 01:19:26.769152 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.769187 kubelet[3470]: W0120 01:19:26.769161 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.769187 kubelet[3470]: E0120 01:19:26.769169 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.789999 containerd[1915]: time="2026-01-20T01:19:26.789801235Z" level=info msg="connecting to shim 43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3" address="unix:///run/containerd/s/5e27c42e1c0221608234c3cc6f2f919fbad2ad3751ec7a103421cb55110c76ac" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:26.812691 systemd[1]: Started cri-containerd-43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3.scope - libcontainer container 43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3. 
Jan 20 01:19:26.841097 containerd[1915]: time="2026-01-20T01:19:26.840325888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rkjqk,Uid:06c6ac75-0ed4-41fd-b9b1-53cd94359145,Namespace:calico-system,Attempt:0,} returns sandbox id \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\"" Jan 20 01:19:26.868126 kubelet[3470]: E0120 01:19:26.868103 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.868126 kubelet[3470]: W0120 01:19:26.868121 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.868228 kubelet[3470]: E0120 01:19:26.868134 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.868289 kubelet[3470]: E0120 01:19:26.868276 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.868325 kubelet[3470]: W0120 01:19:26.868293 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.868325 kubelet[3470]: E0120 01:19:26.868301 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.868480 kubelet[3470]: E0120 01:19:26.868466 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.868480 kubelet[3470]: W0120 01:19:26.868477 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.868664 kubelet[3470]: E0120 01:19:26.868483 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.868740 kubelet[3470]: E0120 01:19:26.868727 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.868827 kubelet[3470]: W0120 01:19:26.868815 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.868879 kubelet[3470]: E0120 01:19:26.868868 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.869061 kubelet[3470]: E0120 01:19:26.869049 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.869126 kubelet[3470]: W0120 01:19:26.869116 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.869174 kubelet[3470]: E0120 01:19:26.869164 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.869393 kubelet[3470]: E0120 01:19:26.869342 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.869393 kubelet[3470]: W0120 01:19:26.869351 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.869393 kubelet[3470]: E0120 01:19:26.869359 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.869592 kubelet[3470]: E0120 01:19:26.869571 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.869592 kubelet[3470]: W0120 01:19:26.869585 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.869592 kubelet[3470]: E0120 01:19:26.869594 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.869710 kubelet[3470]: E0120 01:19:26.869695 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.869710 kubelet[3470]: W0120 01:19:26.869705 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.869710 kubelet[3470]: E0120 01:19:26.869711 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.869806 kubelet[3470]: E0120 01:19:26.869793 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.869806 kubelet[3470]: W0120 01:19:26.869798 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.869806 kubelet[3470]: E0120 01:19:26.869805 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.869956 kubelet[3470]: E0120 01:19:26.869944 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.869956 kubelet[3470]: W0120 01:19:26.869953 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.870010 kubelet[3470]: E0120 01:19:26.869959 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.870129 kubelet[3470]: E0120 01:19:26.870116 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.870129 kubelet[3470]: W0120 01:19:26.870126 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.870180 kubelet[3470]: E0120 01:19:26.870133 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.870316 kubelet[3470]: E0120 01:19:26.870302 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.870316 kubelet[3470]: W0120 01:19:26.870312 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.870371 kubelet[3470]: E0120 01:19:26.870319 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.870488 kubelet[3470]: E0120 01:19:26.870476 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.870488 kubelet[3470]: W0120 01:19:26.870485 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.870552 kubelet[3470]: E0120 01:19:26.870492 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.870638 kubelet[3470]: E0120 01:19:26.870623 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.870638 kubelet[3470]: W0120 01:19:26.870633 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.870793 kubelet[3470]: E0120 01:19:26.870639 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.870863 kubelet[3470]: E0120 01:19:26.870852 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.870909 kubelet[3470]: W0120 01:19:26.870900 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.870953 kubelet[3470]: E0120 01:19:26.870943 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.871128 kubelet[3470]: E0120 01:19:26.871118 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.871219 kubelet[3470]: W0120 01:19:26.871188 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.871271 kubelet[3470]: E0120 01:19:26.871260 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.871535 kubelet[3470]: E0120 01:19:26.871437 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.871535 kubelet[3470]: W0120 01:19:26.871445 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.871535 kubelet[3470]: E0120 01:19:26.871453 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.871669 kubelet[3470]: E0120 01:19:26.871660 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.871725 kubelet[3470]: W0120 01:19:26.871715 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.871771 kubelet[3470]: E0120 01:19:26.871761 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.871948 kubelet[3470]: E0120 01:19:26.871938 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.872108 kubelet[3470]: W0120 01:19:26.872003 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.872108 kubelet[3470]: E0120 01:19:26.872017 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.872225 kubelet[3470]: E0120 01:19:26.872215 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.872454 kubelet[3470]: W0120 01:19:26.872269 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.872454 kubelet[3470]: E0120 01:19:26.872283 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.872592 kubelet[3470]: E0120 01:19:26.872545 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.872592 kubelet[3470]: W0120 01:19:26.872556 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.872592 kubelet[3470]: E0120 01:19:26.872565 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.872774 kubelet[3470]: E0120 01:19:26.872752 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.873033 kubelet[3470]: W0120 01:19:26.872768 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.873073 kubelet[3470]: E0120 01:19:26.873037 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.873246 kubelet[3470]: E0120 01:19:26.873231 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.873246 kubelet[3470]: W0120 01:19:26.873243 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.873307 kubelet[3470]: E0120 01:19:26.873253 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.873689 kubelet[3470]: E0120 01:19:26.873670 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.873689 kubelet[3470]: W0120 01:19:26.873686 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.873765 kubelet[3470]: E0120 01:19:26.873697 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:26.874027 kubelet[3470]: E0120 01:19:26.873940 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.874027 kubelet[3470]: W0120 01:19:26.873974 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.874027 kubelet[3470]: E0120 01:19:26.873982 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:26.885681 kubelet[3470]: E0120 01:19:26.885550 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:26.885681 kubelet[3470]: W0120 01:19:26.885566 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:26.885681 kubelet[3470]: E0120 01:19:26.885578 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:27.749222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470207882.mount: Deactivated successfully. 
Jan 20 01:19:28.182706 containerd[1915]: time="2026-01-20T01:19:28.182581326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:28.184839 containerd[1915]: time="2026-01-20T01:19:28.184813757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 20 01:19:28.187587 containerd[1915]: time="2026-01-20T01:19:28.187546450Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:28.191521 containerd[1915]: time="2026-01-20T01:19:28.191141199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:28.191624 containerd[1915]: time="2026-01-20T01:19:28.191607212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.488356748s" Jan 20 01:19:28.191680 containerd[1915]: time="2026-01-20T01:19:28.191670862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 01:19:28.192664 containerd[1915]: time="2026-01-20T01:19:28.192644497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:19:28.205690 containerd[1915]: time="2026-01-20T01:19:28.205665520Z" level=info msg="CreateContainer within sandbox \"ade628f9a51b04482e60d558a311d1c44228d10f29703f73f0726c5a7e03ff94\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 01:19:28.218393 containerd[1915]: time="2026-01-20T01:19:28.218363597Z" level=info msg="Container 161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:28.234992 containerd[1915]: time="2026-01-20T01:19:28.234948607Z" level=info msg="CreateContainer within sandbox \"ade628f9a51b04482e60d558a311d1c44228d10f29703f73f0726c5a7e03ff94\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3\"" Jan 20 01:19:28.235384 containerd[1915]: time="2026-01-20T01:19:28.235360195Z" level=info msg="StartContainer for \"161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3\"" Jan 20 01:19:28.236269 containerd[1915]: time="2026-01-20T01:19:28.236244708Z" level=info msg="connecting to shim 161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3" address="unix:///run/containerd/s/442308b4ca16d4b4297021b7790a1d7e995032c0e2b477e89ba4f3d1a836063b" protocol=ttrpc version=3 Jan 20 01:19:28.252607 systemd[1]: Started cri-containerd-161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3.scope - libcontainer container 161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3. 
Jan 20 01:19:28.288050 containerd[1915]: time="2026-01-20T01:19:28.288013668Z" level=info msg="StartContainer for \"161785f3b057759d11aed07d62aee2fe71b758f04f6ada4ee1785871a8c2c5c3\" returns successfully" Jan 20 01:19:28.659732 kubelet[3470]: E0120 01:19:28.659306 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:28.735029 kubelet[3470]: I0120 01:19:28.734974 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55cdc485b-kxrzs" podStartSLOduration=1.244720192 podStartE2EDuration="2.734936408s" podCreationTimestamp="2026-01-20 01:19:26 +0000 UTC" firstStartedPulling="2026-01-20 01:19:26.702180042 +0000 UTC m=+21.150493619" lastFinishedPulling="2026-01-20 01:19:28.192396266 +0000 UTC m=+22.640709835" observedRunningTime="2026-01-20 01:19:28.734158994 +0000 UTC m=+23.182472563" watchObservedRunningTime="2026-01-20 01:19:28.734936408 +0000 UTC m=+23.183249977" Jan 20 01:19:28.755592 kubelet[3470]: E0120 01:19:28.755529 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:28.755592 kubelet[3470]: W0120 01:19:28.755544 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:28.755592 kubelet[3470]: E0120 01:19:28.755558 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:28.755904 kubelet[3470]: E0120 01:19:28.755847 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:28.755904 kubelet[3470]: W0120 01:19:28.755858 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:28.755904 kubelet[3470]: E0120 01:19:28.755869 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:19:28.756162 kubelet[3470]: E0120 01:19:28.756110 3470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:19:28.756162 kubelet[3470]: W0120 01:19:28.756120 3470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:19:28.756162 kubelet[3470]: E0120 01:19:28.756128 3470 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:19:29.262529 containerd[1915]: time="2026-01-20T01:19:29.262273586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:29.264012 containerd[1915]: time="2026-01-20T01:19:29.263987107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 20 01:19:29.266212 containerd[1915]: time="2026-01-20T01:19:29.266187624Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:29.269167 containerd[1915]: time="2026-01-20T01:19:29.269141524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:29.269807 containerd[1915]: time="2026-01-20T01:19:29.269781942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.077052577s" Jan 20 01:19:29.269841 containerd[1915]: time="2026-01-20T01:19:29.269810390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 01:19:29.275151 containerd[1915]: time="2026-01-20T01:19:29.274827668Z" level=info msg="CreateContainer within sandbox \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:19:29.291894 containerd[1915]: time="2026-01-20T01:19:29.291869651Z" level=info msg="Container 6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:29.305655 containerd[1915]: time="2026-01-20T01:19:29.305625590Z" level=info msg="CreateContainer within sandbox \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65\"" Jan 20 01:19:29.306252 containerd[1915]: time="2026-01-20T01:19:29.306139388Z" level=info msg="StartContainer for \"6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65\"" Jan 20 01:19:29.308509 containerd[1915]: time="2026-01-20T01:19:29.308078419Z" level=info msg="connecting to shim 6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65" address="unix:///run/containerd/s/5e27c42e1c0221608234c3cc6f2f919fbad2ad3751ec7a103421cb55110c76ac" protocol=ttrpc version=3 Jan 20 01:19:29.323618 systemd[1]: Started cri-containerd-6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65.scope - libcontainer container 6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65. Jan 20 01:19:29.378054 containerd[1915]: time="2026-01-20T01:19:29.378002642Z" level=info msg="StartContainer for \"6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65\" returns successfully" Jan 20 01:19:29.386065 systemd[1]: cri-containerd-6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65.scope: Deactivated successfully. 
Jan 20 01:19:29.389992 containerd[1915]: time="2026-01-20T01:19:29.389952010Z" level=info msg="received container exit event container_id:\"6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65\" id:\"6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65\" pid:4130 exited_at:{seconds:1768871969 nanos:389042408}" Jan 20 01:19:29.406179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6866f7b4a656fb900293a5b1fc2b4b23fa0ec5b8c724aef4ac77a0d570f0bd65-rootfs.mount: Deactivated successfully. Jan 20 01:19:29.725894 kubelet[3470]: I0120 01:19:29.725830 3470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:19:30.659538 kubelet[3470]: E0120 01:19:30.659046 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:30.729671 containerd[1915]: time="2026-01-20T01:19:30.729629975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:19:32.659428 kubelet[3470]: E0120 01:19:32.659377 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:32.669081 containerd[1915]: time="2026-01-20T01:19:32.669035044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:32.671960 containerd[1915]: time="2026-01-20T01:19:32.671846510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 20 
01:19:32.674027 containerd[1915]: time="2026-01-20T01:19:32.673992654Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:32.676559 containerd[1915]: time="2026-01-20T01:19:32.676461511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:32.677329 containerd[1915]: time="2026-01-20T01:19:32.677026638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.947365646s" Jan 20 01:19:32.677329 containerd[1915]: time="2026-01-20T01:19:32.677053631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 01:19:32.682365 containerd[1915]: time="2026-01-20T01:19:32.682338561Z" level=info msg="CreateContainer within sandbox \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:19:32.699166 containerd[1915]: time="2026-01-20T01:19:32.699094480Z" level=info msg="Container 50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:32.711826 containerd[1915]: time="2026-01-20T01:19:32.711725475Z" level=info msg="CreateContainer within sandbox \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3\"" Jan 20 01:19:32.712117 containerd[1915]: time="2026-01-20T01:19:32.712099757Z" level=info msg="StartContainer for \"50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3\"" Jan 20 01:19:32.713237 containerd[1915]: time="2026-01-20T01:19:32.713215666Z" level=info msg="connecting to shim 50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3" address="unix:///run/containerd/s/5e27c42e1c0221608234c3cc6f2f919fbad2ad3751ec7a103421cb55110c76ac" protocol=ttrpc version=3 Jan 20 01:19:32.734624 systemd[1]: Started cri-containerd-50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3.scope - libcontainer container 50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3. Jan 20 01:19:32.793456 containerd[1915]: time="2026-01-20T01:19:32.793324318Z" level=info msg="StartContainer for \"50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3\" returns successfully" Jan 20 01:19:33.894604 systemd[1]: cri-containerd-50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3.scope: Deactivated successfully. Jan 20 01:19:33.896269 systemd[1]: cri-containerd-50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3.scope: Consumed 319ms CPU time, 193.4M memory peak, 165.9M written to disk. Jan 20 01:19:33.897083 containerd[1915]: time="2026-01-20T01:19:33.897036777Z" level=info msg="received container exit event container_id:\"50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3\" id:\"50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3\" pid:4190 exited_at:{seconds:1768871973 nanos:896598950}" Jan 20 01:19:33.911691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50a402f6a3d4fa9483fe1c4f974524008865286ebb230322d0479f7c667218a3-rootfs.mount: Deactivated successfully. 
Jan 20 01:19:33.954958 kubelet[3470]: I0120 01:19:33.954936 3470 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 01:19:34.680272 systemd[1]: Created slice kubepods-besteffort-podb0ce4d6d_0160_4871_9c3a_73730559c915.slice - libcontainer container kubepods-besteffort-podb0ce4d6d_0160_4871_9c3a_73730559c915.slice. Jan 20 01:19:34.720549 kubelet[3470]: I0120 01:19:34.720488 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7nn\" (UniqueName: \"kubernetes.io/projected/b0ce4d6d-0160-4871-9c3a-73730559c915-kube-api-access-zx7nn\") pod \"calico-apiserver-84b6496599-bdfgs\" (UID: \"b0ce4d6d-0160-4871-9c3a-73730559c915\") " pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" Jan 20 01:19:34.720549 kubelet[3470]: I0120 01:19:34.720536 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b0ce4d6d-0160-4871-9c3a-73730559c915-calico-apiserver-certs\") pod \"calico-apiserver-84b6496599-bdfgs\" (UID: \"b0ce4d6d-0160-4871-9c3a-73730559c915\") " pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" Jan 20 01:19:34.749301 systemd[1]: Created slice kubepods-burstable-pode0b6976f_5bf9_4b17_8633_cc09ab7ecd6f.slice - libcontainer container kubepods-burstable-pode0b6976f_5bf9_4b17_8633_cc09ab7ecd6f.slice. Jan 20 01:19:34.758742 systemd[1]: Created slice kubepods-besteffort-pod8fa1e625_99ce_4678_80e8_ad10255fcf43.slice - libcontainer container kubepods-besteffort-pod8fa1e625_99ce_4678_80e8_ad10255fcf43.slice. Jan 20 01:19:34.767167 systemd[1]: Created slice kubepods-besteffort-podcb9cb441_7037_4528_82fe_bf285eddd3a8.slice - libcontainer container kubepods-besteffort-podcb9cb441_7037_4528_82fe_bf285eddd3a8.slice. 
Jan 20 01:19:34.770314 containerd[1915]: time="2026-01-20T01:19:34.770274115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2cs6z,Uid:8fa1e625-99ce-4678-80e8-ad10255fcf43,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:34.773028 systemd[1]: Created slice kubepods-besteffort-pod07b4036f_53cd_480d_a4eb_8badfec721c3.slice - libcontainer container kubepods-besteffort-pod07b4036f_53cd_480d_a4eb_8badfec721c3.slice. Jan 20 01:19:34.787823 systemd[1]: Created slice kubepods-burstable-pode017f9ec_f165_40df_8583_e2bb3341b01b.slice - libcontainer container kubepods-burstable-pode017f9ec_f165_40df_8583_e2bb3341b01b.slice. Jan 20 01:19:34.798425 systemd[1]: Created slice kubepods-besteffort-pod394686c1_b41d_41ec_8fb4_e2ecac3e5f25.slice - libcontainer container kubepods-besteffort-pod394686c1_b41d_41ec_8fb4_e2ecac3e5f25.slice. Jan 20 01:19:34.805952 systemd[1]: Created slice kubepods-besteffort-podb6739b36_c8fc_46a6_8652_d6a5a25da0c2.slice - libcontainer container kubepods-besteffort-podb6739b36_c8fc_46a6_8652_d6a5a25da0c2.slice. Jan 20 01:19:34.814787 systemd[1]: Created slice kubepods-besteffort-pode522ab69_60c4_4bed_bd35_afe9cd973ba9.slice - libcontainer container kubepods-besteffort-pode522ab69_60c4_4bed_bd35_afe9cd973ba9.slice. 
Jan 20 01:19:34.820855 kubelet[3470]: I0120 01:19:34.820813 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hsmc\" (UniqueName: \"kubernetes.io/projected/07b4036f-53cd-480d-a4eb-8badfec721c3-kube-api-access-6hsmc\") pod \"calico-kube-controllers-7d584bf5d7-gtntj\" (UID: \"07b4036f-53cd-480d-a4eb-8badfec721c3\") " pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" Jan 20 01:19:34.821029 kubelet[3470]: I0120 01:19:34.820978 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e522ab69-60c4-4bed-bd35-afe9cd973ba9-calico-apiserver-certs\") pod \"calico-apiserver-7465b9f86b-8bs6q\" (UID: \"e522ab69-60c4-4bed-bd35-afe9cd973ba9\") " pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" Jan 20 01:19:34.821163 kubelet[3470]: I0120 01:19:34.821099 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6739b36-c8fc-46a6-8652-d6a5a25da0c2-calico-apiserver-certs\") pod \"calico-apiserver-84b6496599-7q94c\" (UID: \"b6739b36-c8fc-46a6-8652-d6a5a25da0c2\") " pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" Jan 20 01:19:34.821163 kubelet[3470]: I0120 01:19:34.821119 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-ca-bundle\") pod \"whisker-f444c78d4-ddx64\" (UID: \"cb9cb441-7037-4528-82fe-bf285eddd3a8\") " pod="calico-system/whisker-f444c78d4-ddx64" Jan 20 01:19:34.821320 kubelet[3470]: I0120 01:19:34.821224 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm5fp\" (UniqueName: 
\"kubernetes.io/projected/e017f9ec-f165-40df-8583-e2bb3341b01b-kube-api-access-bm5fp\") pod \"coredns-66bc5c9577-ltwrz\" (UID: \"e017f9ec-f165-40df-8583-e2bb3341b01b\") " pod="kube-system/coredns-66bc5c9577-ltwrz" Jan 20 01:19:34.821320 kubelet[3470]: I0120 01:19:34.821259 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xjz\" (UniqueName: \"kubernetes.io/projected/e522ab69-60c4-4bed-bd35-afe9cd973ba9-kube-api-access-m5xjz\") pod \"calico-apiserver-7465b9f86b-8bs6q\" (UID: \"e522ab69-60c4-4bed-bd35-afe9cd973ba9\") " pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" Jan 20 01:19:34.821320 kubelet[3470]: I0120 01:19:34.821272 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/394686c1-b41d-41ec-8fb4-e2ecac3e5f25-goldmane-key-pair\") pod \"goldmane-7c778bb748-dwz6j\" (UID: \"394686c1-b41d-41ec-8fb4-e2ecac3e5f25\") " pod="calico-system/goldmane-7c778bb748-dwz6j" Jan 20 01:19:34.821320 kubelet[3470]: I0120 01:19:34.821282 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf74q\" (UniqueName: \"kubernetes.io/projected/e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f-kube-api-access-gf74q\") pod \"coredns-66bc5c9577-6gnsc\" (UID: \"e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f\") " pod="kube-system/coredns-66bc5c9577-6gnsc" Jan 20 01:19:34.821569 kubelet[3470]: I0120 01:19:34.821407 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-backend-key-pair\") pod \"whisker-f444c78d4-ddx64\" (UID: \"cb9cb441-7037-4528-82fe-bf285eddd3a8\") " pod="calico-system/whisker-f444c78d4-ddx64" Jan 20 01:19:34.821569 kubelet[3470]: I0120 01:19:34.821425 3470 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07b4036f-53cd-480d-a4eb-8badfec721c3-tigera-ca-bundle\") pod \"calico-kube-controllers-7d584bf5d7-gtntj\" (UID: \"07b4036f-53cd-480d-a4eb-8badfec721c3\") " pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" Jan 20 01:19:34.821569 kubelet[3470]: I0120 01:19:34.821439 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394686c1-b41d-41ec-8fb4-e2ecac3e5f25-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-dwz6j\" (UID: \"394686c1-b41d-41ec-8fb4-e2ecac3e5f25\") " pod="calico-system/goldmane-7c778bb748-dwz6j" Jan 20 01:19:34.821569 kubelet[3470]: I0120 01:19:34.821448 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f-config-volume\") pod \"coredns-66bc5c9577-6gnsc\" (UID: \"e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f\") " pod="kube-system/coredns-66bc5c9577-6gnsc" Jan 20 01:19:34.821569 kubelet[3470]: I0120 01:19:34.821458 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rmrv\" (UniqueName: \"kubernetes.io/projected/cb9cb441-7037-4528-82fe-bf285eddd3a8-kube-api-access-6rmrv\") pod \"whisker-f444c78d4-ddx64\" (UID: \"cb9cb441-7037-4528-82fe-bf285eddd3a8\") " pod="calico-system/whisker-f444c78d4-ddx64" Jan 20 01:19:34.821872 kubelet[3470]: I0120 01:19:34.821635 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e017f9ec-f165-40df-8583-e2bb3341b01b-config-volume\") pod \"coredns-66bc5c9577-ltwrz\" (UID: \"e017f9ec-f165-40df-8583-e2bb3341b01b\") " pod="kube-system/coredns-66bc5c9577-ltwrz" Jan 20 01:19:34.821872 
kubelet[3470]: I0120 01:19:34.821655 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndc4\" (UniqueName: \"kubernetes.io/projected/b6739b36-c8fc-46a6-8652-d6a5a25da0c2-kube-api-access-rndc4\") pod \"calico-apiserver-84b6496599-7q94c\" (UID: \"b6739b36-c8fc-46a6-8652-d6a5a25da0c2\") " pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" Jan 20 01:19:34.821872 kubelet[3470]: I0120 01:19:34.821666 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7c5x\" (UniqueName: \"kubernetes.io/projected/394686c1-b41d-41ec-8fb4-e2ecac3e5f25-kube-api-access-p7c5x\") pod \"goldmane-7c778bb748-dwz6j\" (UID: \"394686c1-b41d-41ec-8fb4-e2ecac3e5f25\") " pod="calico-system/goldmane-7c778bb748-dwz6j" Jan 20 01:19:34.821872 kubelet[3470]: I0120 01:19:34.821791 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394686c1-b41d-41ec-8fb4-e2ecac3e5f25-config\") pod \"goldmane-7c778bb748-dwz6j\" (UID: \"394686c1-b41d-41ec-8fb4-e2ecac3e5f25\") " pod="calico-system/goldmane-7c778bb748-dwz6j" Jan 20 01:19:34.856199 containerd[1915]: time="2026-01-20T01:19:34.856151206Z" level=error msg="Failed to destroy network for sandbox \"4290cc8a8e32e1befc3b1f7fe847b53890b595dd50798eb670f598adb0a1958e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:34.857587 kubelet[3470]: I0120 01:19:34.856834 3470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:19:34.858142 systemd[1]: run-netns-cni\x2d56caf133\x2d35b4\x2d27e3\x2d1476\x2d4b00523ac963.mount: Deactivated successfully. 
Jan 20 01:19:34.860545 containerd[1915]: time="2026-01-20T01:19:34.860428886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2cs6z,Uid:8fa1e625-99ce-4678-80e8-ad10255fcf43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4290cc8a8e32e1befc3b1f7fe847b53890b595dd50798eb670f598adb0a1958e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:34.860758 kubelet[3470]: E0120 01:19:34.860731 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4290cc8a8e32e1befc3b1f7fe847b53890b595dd50798eb670f598adb0a1958e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:34.860853 kubelet[3470]: E0120 01:19:34.860840 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4290cc8a8e32e1befc3b1f7fe847b53890b595dd50798eb670f598adb0a1958e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2cs6z" Jan 20 01:19:34.860940 kubelet[3470]: E0120 01:19:34.860902 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4290cc8a8e32e1befc3b1f7fe847b53890b595dd50798eb670f598adb0a1958e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2cs6z" 
Jan 20 01:19:34.861066 kubelet[3470]: E0120 01:19:34.860992 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4290cc8a8e32e1befc3b1f7fe847b53890b595dd50798eb670f598adb0a1958e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:34.988643 containerd[1915]: time="2026-01-20T01:19:34.988618343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-bdfgs,Uid:b0ce4d6d-0160-4871-9c3a-73730559c915,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:19:35.023098 containerd[1915]: time="2026-01-20T01:19:35.023069534Z" level=error msg="Failed to destroy network for sandbox \"c336d6e4e4762874f68c32847b5e9c06849cfa3dbbefaeb419078498662e7ece\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.025474 containerd[1915]: time="2026-01-20T01:19:35.025448820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-bdfgs,Uid:b0ce4d6d-0160-4871-9c3a-73730559c915,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c336d6e4e4762874f68c32847b5e9c06849cfa3dbbefaeb419078498662e7ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 20 01:19:35.025787 kubelet[3470]: E0120 01:19:35.025751 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c336d6e4e4762874f68c32847b5e9c06849cfa3dbbefaeb419078498662e7ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.025984 kubelet[3470]: E0120 01:19:35.025796 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c336d6e4e4762874f68c32847b5e9c06849cfa3dbbefaeb419078498662e7ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" Jan 20 01:19:35.025984 kubelet[3470]: E0120 01:19:35.025809 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c336d6e4e4762874f68c32847b5e9c06849cfa3dbbefaeb419078498662e7ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" Jan 20 01:19:35.025984 kubelet[3470]: E0120 01:19:35.025848 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b6496599-bdfgs_calico-apiserver(b0ce4d6d-0160-4871-9c3a-73730559c915)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b6496599-bdfgs_calico-apiserver(b0ce4d6d-0160-4871-9c3a-73730559c915)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c336d6e4e4762874f68c32847b5e9c06849cfa3dbbefaeb419078498662e7ece\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:19:35.058113 containerd[1915]: time="2026-01-20T01:19:35.058084251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6gnsc,Uid:e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:35.074631 containerd[1915]: time="2026-01-20T01:19:35.074606533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f444c78d4-ddx64,Uid:cb9cb441-7037-4528-82fe-bf285eddd3a8,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:35.083480 containerd[1915]: time="2026-01-20T01:19:35.083454637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d584bf5d7-gtntj,Uid:07b4036f-53cd-480d-a4eb-8badfec721c3,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:35.093638 containerd[1915]: time="2026-01-20T01:19:35.093609023Z" level=error msg="Failed to destroy network for sandbox \"753b3ce1c6e935d82fcfce49d14772733e2db49b4fb4251f673195178c67ad62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.100986 containerd[1915]: time="2026-01-20T01:19:35.100955519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ltwrz,Uid:e017f9ec-f165-40df-8583-e2bb3341b01b,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:35.101613 containerd[1915]: time="2026-01-20T01:19:35.101585408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6gnsc,Uid:e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"753b3ce1c6e935d82fcfce49d14772733e2db49b4fb4251f673195178c67ad62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.102206 kubelet[3470]: E0120 01:19:35.101943 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"753b3ce1c6e935d82fcfce49d14772733e2db49b4fb4251f673195178c67ad62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.102206 kubelet[3470]: E0120 01:19:35.101999 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"753b3ce1c6e935d82fcfce49d14772733e2db49b4fb4251f673195178c67ad62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6gnsc" Jan 20 01:19:35.102206 kubelet[3470]: E0120 01:19:35.102012 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"753b3ce1c6e935d82fcfce49d14772733e2db49b4fb4251f673195178c67ad62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6gnsc" Jan 20 01:19:35.102363 kubelet[3470]: E0120 01:19:35.102049 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6gnsc_kube-system(e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6gnsc_kube-system(e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"753b3ce1c6e935d82fcfce49d14772733e2db49b4fb4251f673195178c67ad62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6gnsc" podUID="e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f" Jan 20 01:19:35.106528 containerd[1915]: time="2026-01-20T01:19:35.106488960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dwz6j,Uid:394686c1-b41d-41ec-8fb4-e2ecac3e5f25,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:35.117227 containerd[1915]: time="2026-01-20T01:19:35.117207497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-7q94c,Uid:b6739b36-c8fc-46a6-8652-d6a5a25da0c2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:19:35.120444 containerd[1915]: time="2026-01-20T01:19:35.120423710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7465b9f86b-8bs6q,Uid:e522ab69-60c4-4bed-bd35-afe9cd973ba9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:19:35.156600 containerd[1915]: time="2026-01-20T01:19:35.156561337Z" level=error msg="Failed to destroy network for sandbox \"ac4c826cc517cfe971e2541ae65e063584c18273b1280fbb449a85824769cdf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.159110 containerd[1915]: time="2026-01-20T01:19:35.159077395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f444c78d4-ddx64,Uid:cb9cb441-7037-4528-82fe-bf285eddd3a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ac4c826cc517cfe971e2541ae65e063584c18273b1280fbb449a85824769cdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.159355 kubelet[3470]: E0120 01:19:35.159327 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4c826cc517cfe971e2541ae65e063584c18273b1280fbb449a85824769cdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.159429 kubelet[3470]: E0120 01:19:35.159366 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4c826cc517cfe971e2541ae65e063584c18273b1280fbb449a85824769cdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f444c78d4-ddx64" Jan 20 01:19:35.159429 kubelet[3470]: E0120 01:19:35.159381 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4c826cc517cfe971e2541ae65e063584c18273b1280fbb449a85824769cdf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f444c78d4-ddx64" Jan 20 01:19:35.159429 kubelet[3470]: E0120 01:19:35.159419 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f444c78d4-ddx64_calico-system(cb9cb441-7037-4528-82fe-bf285eddd3a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-f444c78d4-ddx64_calico-system(cb9cb441-7037-4528-82fe-bf285eddd3a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac4c826cc517cfe971e2541ae65e063584c18273b1280fbb449a85824769cdf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f444c78d4-ddx64" podUID="cb9cb441-7037-4528-82fe-bf285eddd3a8" Jan 20 01:19:35.177753 containerd[1915]: time="2026-01-20T01:19:35.177475173Z" level=error msg="Failed to destroy network for sandbox \"19dfe1191df6d268b00735dc9b89bb20218e3cd1a8a6cbbf1159b01ce63b8c61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.180818 containerd[1915]: time="2026-01-20T01:19:35.180786684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d584bf5d7-gtntj,Uid:07b4036f-53cd-480d-a4eb-8badfec721c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dfe1191df6d268b00735dc9b89bb20218e3cd1a8a6cbbf1159b01ce63b8c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.181003 kubelet[3470]: E0120 01:19:35.180969 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dfe1191df6d268b00735dc9b89bb20218e3cd1a8a6cbbf1159b01ce63b8c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.181089 kubelet[3470]: E0120 01:19:35.181008 3470 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dfe1191df6d268b00735dc9b89bb20218e3cd1a8a6cbbf1159b01ce63b8c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" Jan 20 01:19:35.181089 kubelet[3470]: E0120 01:19:35.181022 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19dfe1191df6d268b00735dc9b89bb20218e3cd1a8a6cbbf1159b01ce63b8c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" Jan 20 01:19:35.181089 kubelet[3470]: E0120 01:19:35.181067 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d584bf5d7-gtntj_calico-system(07b4036f-53cd-480d-a4eb-8badfec721c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d584bf5d7-gtntj_calico-system(07b4036f-53cd-480d-a4eb-8badfec721c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19dfe1191df6d268b00735dc9b89bb20218e3cd1a8a6cbbf1159b01ce63b8c61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:19:35.186447 containerd[1915]: time="2026-01-20T01:19:35.186415175Z" level=error msg="Failed to destroy network for sandbox 
\"f9a950c3544ac34daec052980cc0cfcaebfcb58cbaba3e1ab927e0675bd872ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.189131 containerd[1915]: time="2026-01-20T01:19:35.189052237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ltwrz,Uid:e017f9ec-f165-40df-8583-e2bb3341b01b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a950c3544ac34daec052980cc0cfcaebfcb58cbaba3e1ab927e0675bd872ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.189246 kubelet[3470]: E0120 01:19:35.189194 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a950c3544ac34daec052980cc0cfcaebfcb58cbaba3e1ab927e0675bd872ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.189246 kubelet[3470]: E0120 01:19:35.189226 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a950c3544ac34daec052980cc0cfcaebfcb58cbaba3e1ab927e0675bd872ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ltwrz" Jan 20 01:19:35.189246 kubelet[3470]: E0120 01:19:35.189240 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f9a950c3544ac34daec052980cc0cfcaebfcb58cbaba3e1ab927e0675bd872ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ltwrz" Jan 20 01:19:35.189761 kubelet[3470]: E0120 01:19:35.189280 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ltwrz_kube-system(e017f9ec-f165-40df-8583-e2bb3341b01b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ltwrz_kube-system(e017f9ec-f165-40df-8583-e2bb3341b01b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9a950c3544ac34daec052980cc0cfcaebfcb58cbaba3e1ab927e0675bd872ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ltwrz" podUID="e017f9ec-f165-40df-8583-e2bb3341b01b" Jan 20 01:19:35.196401 containerd[1915]: time="2026-01-20T01:19:35.196367372Z" level=error msg="Failed to destroy network for sandbox \"bedafd399b3495cc290d22f684873a470a64d899ae0738771b104ef93b08b0c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.198943 containerd[1915]: time="2026-01-20T01:19:35.198910063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dwz6j,Uid:394686c1-b41d-41ec-8fb4-e2ecac3e5f25,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bedafd399b3495cc290d22f684873a470a64d899ae0738771b104ef93b08b0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.199468 kubelet[3470]: E0120 01:19:35.199278 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bedafd399b3495cc290d22f684873a470a64d899ae0738771b104ef93b08b0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.199468 kubelet[3470]: E0120 01:19:35.199337 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bedafd399b3495cc290d22f684873a470a64d899ae0738771b104ef93b08b0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dwz6j" Jan 20 01:19:35.199468 kubelet[3470]: E0120 01:19:35.199350 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bedafd399b3495cc290d22f684873a470a64d899ae0738771b104ef93b08b0c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dwz6j" Jan 20 01:19:35.199600 kubelet[3470]: E0120 01:19:35.199380 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dwz6j_calico-system(394686c1-b41d-41ec-8fb4-e2ecac3e5f25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dwz6j_calico-system(394686c1-b41d-41ec-8fb4-e2ecac3e5f25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"bedafd399b3495cc290d22f684873a470a64d899ae0738771b104ef93b08b0c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:19:35.213857 containerd[1915]: time="2026-01-20T01:19:35.213828358Z" level=error msg="Failed to destroy network for sandbox \"7d84f4a418403f261bc35c8b642c905e8d38ab2c230e48a7d327de434dd194b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.214558 containerd[1915]: time="2026-01-20T01:19:35.214534417Z" level=error msg="Failed to destroy network for sandbox \"687c481961b54b09f021f4c72e0695a452cd29cd5f8441c67dd9957a7f985f41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.216401 containerd[1915]: time="2026-01-20T01:19:35.216374361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-7q94c,Uid:b6739b36-c8fc-46a6-8652-d6a5a25da0c2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d84f4a418403f261bc35c8b642c905e8d38ab2c230e48a7d327de434dd194b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.216592 kubelet[3470]: E0120 01:19:35.216572 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d84f4a418403f261bc35c8b642c905e8d38ab2c230e48a7d327de434dd194b5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.216767 kubelet[3470]: E0120 01:19:35.216671 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d84f4a418403f261bc35c8b642c905e8d38ab2c230e48a7d327de434dd194b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" Jan 20 01:19:35.216767 kubelet[3470]: E0120 01:19:35.216696 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d84f4a418403f261bc35c8b642c905e8d38ab2c230e48a7d327de434dd194b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" Jan 20 01:19:35.216767 kubelet[3470]: E0120 01:19:35.216737 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b6496599-7q94c_calico-apiserver(b6739b36-c8fc-46a6-8652-d6a5a25da0c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b6496599-7q94c_calico-apiserver(b6739b36-c8fc-46a6-8652-d6a5a25da0c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d84f4a418403f261bc35c8b642c905e8d38ab2c230e48a7d327de434dd194b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" 
podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2" Jan 20 01:19:35.218321 containerd[1915]: time="2026-01-20T01:19:35.218238194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7465b9f86b-8bs6q,Uid:e522ab69-60c4-4bed-bd35-afe9cd973ba9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"687c481961b54b09f021f4c72e0695a452cd29cd5f8441c67dd9957a7f985f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.218488 kubelet[3470]: E0120 01:19:35.218443 3470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"687c481961b54b09f021f4c72e0695a452cd29cd5f8441c67dd9957a7f985f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:19:35.218773 kubelet[3470]: E0120 01:19:35.218748 3470 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"687c481961b54b09f021f4c72e0695a452cd29cd5f8441c67dd9957a7f985f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" Jan 20 01:19:35.218807 kubelet[3470]: E0120 01:19:35.218776 3470 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"687c481961b54b09f021f4c72e0695a452cd29cd5f8441c67dd9957a7f985f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" Jan 20 01:19:35.218827 kubelet[3470]: E0120 01:19:35.218815 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7465b9f86b-8bs6q_calico-apiserver(e522ab69-60c4-4bed-bd35-afe9cd973ba9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7465b9f86b-8bs6q_calico-apiserver(e522ab69-60c4-4bed-bd35-afe9cd973ba9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"687c481961b54b09f021f4c72e0695a452cd29cd5f8441c67dd9957a7f985f41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9" Jan 20 01:19:35.754378 containerd[1915]: time="2026-01-20T01:19:35.754332006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:19:39.239653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159579254.mount: Deactivated successfully. 
Jan 20 01:19:39.546775 containerd[1915]: time="2026-01-20T01:19:39.546667529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:39.548535 containerd[1915]: time="2026-01-20T01:19:39.548508679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 20 01:19:39.550950 containerd[1915]: time="2026-01-20T01:19:39.550912373Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:39.556354 containerd[1915]: time="2026-01-20T01:19:39.556314482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:19:39.556828 containerd[1915]: time="2026-01-20T01:19:39.556686541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.802316654s" Jan 20 01:19:39.556828 containerd[1915]: time="2026-01-20T01:19:39.556716782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 01:19:39.576989 containerd[1915]: time="2026-01-20T01:19:39.576964371Z" level=info msg="CreateContainer within sandbox \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:19:39.595088 containerd[1915]: time="2026-01-20T01:19:39.594661039Z" level=info msg="Container 
88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:39.608058 containerd[1915]: time="2026-01-20T01:19:39.608028836Z" level=info msg="CreateContainer within sandbox \"43cba119f7ab40947f08ada8737898f9a61af2ab3da64c8bb388e6c5a6eac4e3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4\"" Jan 20 01:19:39.609322 containerd[1915]: time="2026-01-20T01:19:39.608397135Z" level=info msg="StartContainer for \"88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4\"" Jan 20 01:19:39.609521 containerd[1915]: time="2026-01-20T01:19:39.609485294Z" level=info msg="connecting to shim 88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4" address="unix:///run/containerd/s/5e27c42e1c0221608234c3cc6f2f919fbad2ad3751ec7a103421cb55110c76ac" protocol=ttrpc version=3 Jan 20 01:19:39.628626 systemd[1]: Started cri-containerd-88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4.scope - libcontainer container 88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4. Jan 20 01:19:39.692568 containerd[1915]: time="2026-01-20T01:19:39.692542353Z" level=info msg="StartContainer for \"88401bbe8b25b0f407712cef0c61eb319cf1893c236f208a88a67b3e9d105ae4\" returns successfully" Jan 20 01:19:39.792695 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:19:39.792775 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 01:19:39.795265 kubelet[3470]: I0120 01:19:39.795215 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rkjqk" podStartSLOduration=1.079738299 podStartE2EDuration="13.794989305s" podCreationTimestamp="2026-01-20 01:19:26 +0000 UTC" firstStartedPulling="2026-01-20 01:19:26.842122387 +0000 UTC m=+21.290435964" lastFinishedPulling="2026-01-20 01:19:39.557373401 +0000 UTC m=+34.005686970" observedRunningTime="2026-01-20 01:19:39.794785627 +0000 UTC m=+34.243099212" watchObservedRunningTime="2026-01-20 01:19:39.794989305 +0000 UTC m=+34.243302874" Jan 20 01:19:39.954099 kubelet[3470]: I0120 01:19:39.954058 3470 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-backend-key-pair\") pod \"cb9cb441-7037-4528-82fe-bf285eddd3a8\" (UID: \"cb9cb441-7037-4528-82fe-bf285eddd3a8\") " Jan 20 01:19:39.954099 kubelet[3470]: I0120 01:19:39.954103 3470 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmrv\" (UniqueName: \"kubernetes.io/projected/cb9cb441-7037-4528-82fe-bf285eddd3a8-kube-api-access-6rmrv\") pod \"cb9cb441-7037-4528-82fe-bf285eddd3a8\" (UID: \"cb9cb441-7037-4528-82fe-bf285eddd3a8\") " Jan 20 01:19:39.954099 kubelet[3470]: I0120 01:19:39.954119 3470 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-ca-bundle\") pod \"cb9cb441-7037-4528-82fe-bf285eddd3a8\" (UID: \"cb9cb441-7037-4528-82fe-bf285eddd3a8\") " Jan 20 01:19:39.957004 kubelet[3470]: I0120 01:19:39.956849 3470 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"cb9cb441-7037-4528-82fe-bf285eddd3a8" (UID: "cb9cb441-7037-4528-82fe-bf285eddd3a8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:19:39.960119 kubelet[3470]: I0120 01:19:39.959990 3470 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9cb441-7037-4528-82fe-bf285eddd3a8-kube-api-access-6rmrv" (OuterVolumeSpecName: "kube-api-access-6rmrv") pod "cb9cb441-7037-4528-82fe-bf285eddd3a8" (UID: "cb9cb441-7037-4528-82fe-bf285eddd3a8"). InnerVolumeSpecName "kube-api-access-6rmrv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:19:39.960119 kubelet[3470]: I0120 01:19:39.960076 3470 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cb9cb441-7037-4528-82fe-bf285eddd3a8" (UID: "cb9cb441-7037-4528-82fe-bf285eddd3a8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:19:40.055018 kubelet[3470]: I0120 01:19:40.054931 3470 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-ca-bundle\") on node \"ci-4459.2.2-n-d40ac89f78\" DevicePath \"\"" Jan 20 01:19:40.055018 kubelet[3470]: I0120 01:19:40.054959 3470 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb9cb441-7037-4528-82fe-bf285eddd3a8-whisker-backend-key-pair\") on node \"ci-4459.2.2-n-d40ac89f78\" DevicePath \"\"" Jan 20 01:19:40.055018 kubelet[3470]: I0120 01:19:40.054967 3470 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmrv\" (UniqueName: \"kubernetes.io/projected/cb9cb441-7037-4528-82fe-bf285eddd3a8-kube-api-access-6rmrv\") on node \"ci-4459.2.2-n-d40ac89f78\" DevicePath \"\"" Jan 20 01:19:40.239656 systemd[1]: var-lib-kubelet-pods-cb9cb441\x2d7037\x2d4528\x2d82fe\x2dbf285eddd3a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6rmrv.mount: Deactivated successfully. Jan 20 01:19:40.239751 systemd[1]: var-lib-kubelet-pods-cb9cb441\x2d7037\x2d4528\x2d82fe\x2dbf285eddd3a8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:19:40.782237 systemd[1]: Removed slice kubepods-besteffort-podcb9cb441_7037_4528_82fe_bf285eddd3a8.slice - libcontainer container kubepods-besteffort-podcb9cb441_7037_4528_82fe_bf285eddd3a8.slice. Jan 20 01:19:40.861144 systemd[1]: Created slice kubepods-besteffort-pod55b2502a_51c1_4f19_87b3_fdc15037a275.slice - libcontainer container kubepods-besteffort-pod55b2502a_51c1_4f19_87b3_fdc15037a275.slice. 
Jan 20 01:19:40.959632 kubelet[3470]: I0120 01:19:40.959573 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9htls\" (UniqueName: \"kubernetes.io/projected/55b2502a-51c1-4f19-87b3-fdc15037a275-kube-api-access-9htls\") pod \"whisker-5f68b67c86-6bqpm\" (UID: \"55b2502a-51c1-4f19-87b3-fdc15037a275\") " pod="calico-system/whisker-5f68b67c86-6bqpm" Jan 20 01:19:40.960043 kubelet[3470]: I0120 01:19:40.959615 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55b2502a-51c1-4f19-87b3-fdc15037a275-whisker-ca-bundle\") pod \"whisker-5f68b67c86-6bqpm\" (UID: \"55b2502a-51c1-4f19-87b3-fdc15037a275\") " pod="calico-system/whisker-5f68b67c86-6bqpm" Jan 20 01:19:40.960043 kubelet[3470]: I0120 01:19:40.959991 3470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55b2502a-51c1-4f19-87b3-fdc15037a275-whisker-backend-key-pair\") pod \"whisker-5f68b67c86-6bqpm\" (UID: \"55b2502a-51c1-4f19-87b3-fdc15037a275\") " pod="calico-system/whisker-5f68b67c86-6bqpm" Jan 20 01:19:41.169004 containerd[1915]: time="2026-01-20T01:19:41.168770672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f68b67c86-6bqpm,Uid:55b2502a-51c1-4f19-87b3-fdc15037a275,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:41.671616 kubelet[3470]: I0120 01:19:41.671574 3470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9cb441-7037-4528-82fe-bf285eddd3a8" path="/var/lib/kubelet/pods/cb9cb441-7037-4528-82fe-bf285eddd3a8/volumes" Jan 20 01:19:41.807188 systemd-networkd[1493]: calia76993cb146: Link UP Jan 20 01:19:41.808334 systemd-networkd[1493]: calia76993cb146: Gained carrier Jan 20 01:19:41.830444 containerd[1915]: 2026-01-20 01:19:41.199 [INFO][4623] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 20 01:19:41.830444 containerd[1915]: 2026-01-20 01:19:41.385 [INFO][4623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0 whisker-5f68b67c86- calico-system 55b2502a-51c1-4f19-87b3-fdc15037a275 895 0 2026-01-20 01:19:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f68b67c86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 whisker-5f68b67c86-6bqpm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia76993cb146 [] [] }} ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-" Jan 20 01:19:41.830444 containerd[1915]: 2026-01-20 01:19:41.385 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.830444 containerd[1915]: 2026-01-20 01:19:41.422 [INFO][4654] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" HandleID="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Workload="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.422 [INFO][4654] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" 
HandleID="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Workload="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"whisker-5f68b67c86-6bqpm", "timestamp":"2026-01-20 01:19:41.422076633 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.422 [INFO][4654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.422 [INFO][4654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.422 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.430 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.434 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.440 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.441 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830629 containerd[1915]: 2026-01-20 01:19:41.443 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.443 [INFO][4654] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.444 [INFO][4654] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03 Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.449 [INFO][4654] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.457 [INFO][4654] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.1/26] block=192.168.54.0/26 handle="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.457 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.1/26] handle="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.457 [INFO][4654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:19:41.830760 containerd[1915]: 2026-01-20 01:19:41.457 [INFO][4654] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.1/26] IPv6=[] ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" HandleID="k8s-pod-network.35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Workload="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.830853 containerd[1915]: 2026-01-20 01:19:41.461 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0", GenerateName:"whisker-5f68b67c86-", Namespace:"calico-system", SelfLink:"", UID:"55b2502a-51c1-4f19-87b3-fdc15037a275", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f68b67c86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"whisker-5f68b67c86-6bqpm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calia76993cb146", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:41.830853 containerd[1915]: 2026-01-20 01:19:41.461 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.1/32] ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.830897 containerd[1915]: 2026-01-20 01:19:41.461 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia76993cb146 ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.830897 containerd[1915]: 2026-01-20 01:19:41.808 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.830924 containerd[1915]: 2026-01-20 01:19:41.808 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0", GenerateName:"whisker-5f68b67c86-", Namespace:"calico-system", SelfLink:"", UID:"55b2502a-51c1-4f19-87b3-fdc15037a275", 
ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f68b67c86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03", Pod:"whisker-5f68b67c86-6bqpm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia76993cb146", MAC:"b6:bc:6f:ab:39:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:41.830955 containerd[1915]: 2026-01-20 01:19:41.822 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" Namespace="calico-system" Pod="whisker-5f68b67c86-6bqpm" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-whisker--5f68b67c86--6bqpm-eth0" Jan 20 01:19:41.887866 systemd-networkd[1493]: vxlan.calico: Link UP Jan 20 01:19:41.887873 systemd-networkd[1493]: vxlan.calico: Gained carrier Jan 20 01:19:42.240698 containerd[1915]: time="2026-01-20T01:19:42.240632671Z" level=info msg="connecting to shim 35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03" address="unix:///run/containerd/s/763b9a9a4d7d61cee01eb2a61cdb73b1ce0575e2f9da75dd8e5453d735139426" namespace=k8s.io protocol=ttrpc version=3 Jan 
20 01:19:42.261615 systemd[1]: Started cri-containerd-35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03.scope - libcontainer container 35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03. Jan 20 01:19:42.292946 containerd[1915]: time="2026-01-20T01:19:42.292903401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f68b67c86-6bqpm,Uid:55b2502a-51c1-4f19-87b3-fdc15037a275,Namespace:calico-system,Attempt:0,} returns sandbox id \"35052743eb658709d4e893e02fa6c059bac1e9460e0c2250dfa37c34c3896b03\"" Jan 20 01:19:42.295009 containerd[1915]: time="2026-01-20T01:19:42.294969925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:19:42.562210 containerd[1915]: time="2026-01-20T01:19:42.562096600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:42.564625 containerd[1915]: time="2026-01-20T01:19:42.564588312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:19:42.565049 containerd[1915]: time="2026-01-20T01:19:42.564656106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:19:42.565087 kubelet[3470]: E0120 01:19:42.564772 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:19:42.565087 kubelet[3470]: E0120 01:19:42.564814 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:19:42.565087 kubelet[3470]: E0120 01:19:42.564886 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:42.565761 containerd[1915]: time="2026-01-20T01:19:42.565685704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:19:42.814676 containerd[1915]: time="2026-01-20T01:19:42.814560664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:42.816793 containerd[1915]: time="2026-01-20T01:19:42.816763336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:19:42.816941 containerd[1915]: time="2026-01-20T01:19:42.816821106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:19:42.816974 kubelet[3470]: E0120 01:19:42.816917 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:19:42.816974 kubelet[3470]: E0120 01:19:42.816950 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:19:42.817031 kubelet[3470]: E0120 01:19:42.817009 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:42.817068 kubelet[3470]: E0120 01:19:42.817041 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:19:43.312661 systemd-networkd[1493]: 
vxlan.calico: Gained IPv6LL Jan 20 01:19:43.760621 systemd-networkd[1493]: calia76993cb146: Gained IPv6LL Jan 20 01:19:43.787256 kubelet[3470]: E0120 01:19:43.787217 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:19:45.665034 containerd[1915]: time="2026-01-20T01:19:45.664994050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2cs6z,Uid:8fa1e625-99ce-4678-80e8-ad10255fcf43,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:45.698819 kubelet[3470]: I0120 01:19:45.698785 3470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:19:45.762780 systemd-networkd[1493]: cali92a78b601bc: Link UP Jan 20 01:19:45.763334 systemd-networkd[1493]: cali92a78b601bc: Gained carrier Jan 20 01:19:45.787628 containerd[1915]: 2026-01-20 01:19:45.695 [INFO][4810] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0 csi-node-driver- calico-system 8fa1e625-99ce-4678-80e8-ad10255fcf43 
717 0 2026-01-20 01:19:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 csi-node-driver-2cs6z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92a78b601bc [] [] }} ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-" Jan 20 01:19:45.787628 containerd[1915]: 2026-01-20 01:19:45.695 [INFO][4810] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.787628 containerd[1915]: 2026-01-20 01:19:45.721 [INFO][4823] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" HandleID="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Workload="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.721 [INFO][4823] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" HandleID="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Workload="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-d40ac89f78", 
"pod":"csi-node-driver-2cs6z", "timestamp":"2026-01-20 01:19:45.721613499 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.721 [INFO][4823] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.721 [INFO][4823] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.721 [INFO][4823] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.729 [INFO][4823] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.733 [INFO][4823] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.737 [INFO][4823] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.738 [INFO][4823] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788044 containerd[1915]: 2026-01-20 01:19:45.740 [INFO][4823] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.740 [INFO][4823] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" 
host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.742 [INFO][4823] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.747 [INFO][4823] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.752 [INFO][4823] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.2/26] block=192.168.54.0/26 handle="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.752 [INFO][4823] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.2/26] handle="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.752 [INFO][4823] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:19:45.788175 containerd[1915]: 2026-01-20 01:19:45.752 [INFO][4823] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.2/26] IPv6=[] ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" HandleID="k8s-pod-network.1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Workload="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.788263 containerd[1915]: 2026-01-20 01:19:45.757 [INFO][4810] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8fa1e625-99ce-4678-80e8-ad10255fcf43", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"csi-node-driver-2cs6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92a78b601bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:45.788297 containerd[1915]: 2026-01-20 01:19:45.757 [INFO][4810] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.2/32] ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.788297 containerd[1915]: 2026-01-20 01:19:45.757 [INFO][4810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92a78b601bc ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.788297 containerd[1915]: 2026-01-20 01:19:45.764 [INFO][4810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.788338 containerd[1915]: 2026-01-20 01:19:45.764 [INFO][4810] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"8fa1e625-99ce-4678-80e8-ad10255fcf43", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b", Pod:"csi-node-driver-2cs6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92a78b601bc", MAC:"ca:fd:8e:4e:bd:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:45.788371 containerd[1915]: 2026-01-20 01:19:45.778 [INFO][4810] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" Namespace="calico-system" Pod="csi-node-driver-2cs6z" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-csi--node--driver--2cs6z-eth0" Jan 20 01:19:45.833747 containerd[1915]: time="2026-01-20T01:19:45.833685434Z" level=info msg="connecting to shim 1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b" address="unix:///run/containerd/s/d7ad27e9524345d2d75319c6d45e11fadd266adefaef5f1ccdb57ac8391d2662" namespace=k8s.io protocol=ttrpc 
version=3 Jan 20 01:19:45.856624 systemd[1]: Started cri-containerd-1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b.scope - libcontainer container 1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b. Jan 20 01:19:45.890033 containerd[1915]: time="2026-01-20T01:19:45.889849030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2cs6z,Uid:8fa1e625-99ce-4678-80e8-ad10255fcf43,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ef3d76c5a6ef42f657150bcb0688fecf45d922e95361623abe2415b78ad0e5b\"" Jan 20 01:19:45.891844 containerd[1915]: time="2026-01-20T01:19:45.891795647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:19:46.143316 containerd[1915]: time="2026-01-20T01:19:46.143271866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:46.149589 containerd[1915]: time="2026-01-20T01:19:46.149555097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:19:46.149693 containerd[1915]: time="2026-01-20T01:19:46.149629163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:19:46.149848 kubelet[3470]: E0120 01:19:46.149817 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:19:46.149945 kubelet[3470]: E0120 01:19:46.149931 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:19:46.150144 kubelet[3470]: E0120 01:19:46.150075 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:46.151858 containerd[1915]: time="2026-01-20T01:19:46.151841603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:19:46.392189 containerd[1915]: time="2026-01-20T01:19:46.392153394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:46.395352 containerd[1915]: time="2026-01-20T01:19:46.395258340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:19:46.395352 containerd[1915]: time="2026-01-20T01:19:46.395324406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:19:46.395906 kubelet[3470]: E0120 01:19:46.395876 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:19:46.395983 kubelet[3470]: E0120 01:19:46.395970 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:19:46.396093 kubelet[3470]: E0120 01:19:46.396077 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:46.396214 kubelet[3470]: E0120 01:19:46.396192 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" 
podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:46.663641 containerd[1915]: time="2026-01-20T01:19:46.663553263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d584bf5d7-gtntj,Uid:07b4036f-53cd-480d-a4eb-8badfec721c3,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:46.667063 containerd[1915]: time="2026-01-20T01:19:46.666991812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-bdfgs,Uid:b0ce4d6d-0160-4871-9c3a-73730559c915,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:19:46.763820 systemd-networkd[1493]: cali0574cc3573b: Link UP Jan 20 01:19:46.764835 systemd-networkd[1493]: cali0574cc3573b: Gained carrier Jan 20 01:19:46.779754 containerd[1915]: 2026-01-20 01:19:46.704 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0 calico-kube-controllers-7d584bf5d7- calico-system 07b4036f-53cd-480d-a4eb-8badfec721c3 821 0 2026-01-20 01:19:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d584bf5d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 calico-kube-controllers-7d584bf5d7-gtntj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0574cc3573b [] [] }} ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-" Jan 20 01:19:46.779754 containerd[1915]: 2026-01-20 01:19:46.704 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.779754 containerd[1915]: 2026-01-20 01:19:46.725 [INFO][4959] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" HandleID="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.725 [INFO][4959] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" HandleID="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"calico-kube-controllers-7d584bf5d7-gtntj", "timestamp":"2026-01-20 01:19:46.725174379 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.725 [INFO][4959] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.725 [INFO][4959] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.725 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.730 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.734 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.738 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.740 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.779985 containerd[1915]: 2026-01-20 01:19:46.741 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.742 [INFO][4959] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.743 [INFO][4959] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9 Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.747 [INFO][4959] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.755 [INFO][4959] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.54.3/26] block=192.168.54.0/26 handle="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.756 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.3/26] handle="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.756 [INFO][4959] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:19:46.780192 containerd[1915]: 2026-01-20 01:19:46.756 [INFO][4959] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.3/26] IPv6=[] ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" HandleID="k8s-pod-network.e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.780315 containerd[1915]: 2026-01-20 01:19:46.758 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0", GenerateName:"calico-kube-controllers-7d584bf5d7-", Namespace:"calico-system", SelfLink:"", UID:"07b4036f-53cd-480d-a4eb-8badfec721c3", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"7d584bf5d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"calico-kube-controllers-7d584bf5d7-gtntj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0574cc3573b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:46.780475 containerd[1915]: 2026-01-20 01:19:46.758 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.3/32] ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.780475 containerd[1915]: 2026-01-20 01:19:46.758 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0574cc3573b ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.780475 containerd[1915]: 2026-01-20 01:19:46.765 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" 
Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.780562 containerd[1915]: 2026-01-20 01:19:46.766 [INFO][4933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0", GenerateName:"calico-kube-controllers-7d584bf5d7-", Namespace:"calico-system", SelfLink:"", UID:"07b4036f-53cd-480d-a4eb-8badfec721c3", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d584bf5d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9", Pod:"calico-kube-controllers-7d584bf5d7-gtntj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0574cc3573b", MAC:"72:8a:09:8e:3f:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:46.780600 containerd[1915]: 2026-01-20 01:19:46.777 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" Namespace="calico-system" Pod="calico-kube-controllers-7d584bf5d7-gtntj" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--kube--controllers--7d584bf5d7--gtntj-eth0" Jan 20 01:19:46.803973 kubelet[3470]: E0120 01:19:46.803933 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:46.822386 containerd[1915]: time="2026-01-20T01:19:46.822308935Z" level=info msg="connecting to shim e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9" address="unix:///run/containerd/s/69786e6cfedf594ece5a9566614e4ebc4e3bc5aa5a337316ebbd264c03cf6ea4" namespace=k8s.io protocol=ttrpc 
version=3 Jan 20 01:19:46.850708 systemd[1]: Started cri-containerd-e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9.scope - libcontainer container e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9. Jan 20 01:19:46.885268 systemd-networkd[1493]: calicae9077bffd: Link UP Jan 20 01:19:46.892525 systemd-networkd[1493]: calicae9077bffd: Gained carrier Jan 20 01:19:46.908434 containerd[1915]: time="2026-01-20T01:19:46.908409305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d584bf5d7-gtntj,Uid:07b4036f-53cd-480d-a4eb-8badfec721c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e25b912c5fafe53d2baaf26a5400586b32e75104845d38633fdab1dc4fa637f9\"" Jan 20 01:19:46.912170 containerd[1915]: time="2026-01-20T01:19:46.912150838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:19:46.914974 containerd[1915]: 2026-01-20 01:19:46.704 [INFO][4943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0 calico-apiserver-84b6496599- calico-apiserver b0ce4d6d-0160-4871-9c3a-73730559c915 818 0 2026-01-20 01:19:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b6496599 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 calico-apiserver-84b6496599-bdfgs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicae9077bffd [] [] }} ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-" Jan 20 01:19:46.914974 containerd[1915]: 2026-01-20 01:19:46.704 
[INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.914974 containerd[1915]: 2026-01-20 01:19:46.727 [INFO][4957] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" HandleID="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.727 [INFO][4957] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" HandleID="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b0bc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"calico-apiserver-84b6496599-bdfgs", "timestamp":"2026-01-20 01:19:46.727057726 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.727 [INFO][4957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.756 [INFO][4957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.756 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.836 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.845 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.852 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.855 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915106 containerd[1915]: 2026-01-20 01:19:46.858 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.858 [INFO][4957] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.860 [INFO][4957] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333 Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.865 [INFO][4957] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.875 [INFO][4957] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.54.4/26] block=192.168.54.0/26 handle="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.875 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.4/26] handle="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.875 [INFO][4957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:19:46.915242 containerd[1915]: 2026-01-20 01:19:46.875 [INFO][4957] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.4/26] IPv6=[] ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" HandleID="k8s-pod-network.908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.915334 containerd[1915]: 2026-01-20 01:19:46.877 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0", GenerateName:"calico-apiserver-84b6496599-", Namespace:"calico-apiserver", SelfLink:"", UID:"b0ce4d6d-0160-4871-9c3a-73730559c915", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"84b6496599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"calico-apiserver-84b6496599-bdfgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicae9077bffd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:46.915369 containerd[1915]: 2026-01-20 01:19:46.877 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.4/32] ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.915369 containerd[1915]: 2026-01-20 01:19:46.877 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicae9077bffd ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.915369 containerd[1915]: 2026-01-20 01:19:46.893 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" 
WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.915412 containerd[1915]: 2026-01-20 01:19:46.894 [INFO][4943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0", GenerateName:"calico-apiserver-84b6496599-", Namespace:"calico-apiserver", SelfLink:"", UID:"b0ce4d6d-0160-4871-9c3a-73730559c915", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6496599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333", Pod:"calico-apiserver-84b6496599-bdfgs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicae9077bffd", MAC:"d6:8f:56:cc:68:2c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:46.915445 containerd[1915]: 2026-01-20 01:19:46.910 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-bdfgs" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--bdfgs-eth0" Jan 20 01:19:46.957738 containerd[1915]: time="2026-01-20T01:19:46.957695183Z" level=info msg="connecting to shim 908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333" address="unix:///run/containerd/s/97b13b88166e1dc3f66378aeade5bd482f7a0b3ff2cb4bb83f3d834ebf252206" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:46.979622 systemd[1]: Started cri-containerd-908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333.scope - libcontainer container 908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333. 
Jan 20 01:19:47.022225 containerd[1915]: time="2026-01-20T01:19:47.022144911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-bdfgs,Uid:b0ce4d6d-0160-4871-9c3a-73730559c915,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"908e9a84f75091785e8286e2ff7e7b0370ccd3ce814ac5da8527ad97e49bb333\"" Jan 20 01:19:47.173144 containerd[1915]: time="2026-01-20T01:19:47.173025770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:47.176009 containerd[1915]: time="2026-01-20T01:19:47.175928624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:19:47.176009 containerd[1915]: time="2026-01-20T01:19:47.175982538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:19:47.176171 kubelet[3470]: E0120 01:19:47.176132 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:19:47.176219 kubelet[3470]: E0120 01:19:47.176180 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 
01:19:47.176405 kubelet[3470]: E0120 01:19:47.176350 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d584bf5d7-gtntj_calico-system(07b4036f-53cd-480d-a4eb-8badfec721c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:47.177003 containerd[1915]: time="2026-01-20T01:19:47.176654804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:19:47.177044 kubelet[3470]: E0120 01:19:47.177010 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:19:47.432762 containerd[1915]: time="2026-01-20T01:19:47.432636843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:47.434862 containerd[1915]: time="2026-01-20T01:19:47.434770701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:19:47.434862 containerd[1915]: time="2026-01-20T01:19:47.434827095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: 
active requests=0, bytes read=77" Jan 20 01:19:47.435091 kubelet[3470]: E0120 01:19:47.435043 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:19:47.435593 kubelet[3470]: E0120 01:19:47.435549 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:19:47.435965 kubelet[3470]: E0120 01:19:47.435726 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-bdfgs_calico-apiserver(b0ce4d6d-0160-4871-9c3a-73730559c915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:47.435965 kubelet[3470]: E0120 01:19:47.435934 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:19:47.472701 systemd-networkd[1493]: cali92a78b601bc: Gained 
IPv6LL Jan 20 01:19:47.807825 kubelet[3470]: E0120 01:19:47.806902 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:19:47.808529 kubelet[3470]: E0120 01:19:47.808363 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:19:47.808828 kubelet[3470]: E0120 01:19:47.808703 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:19:47.857656 systemd-networkd[1493]: cali0574cc3573b: Gained IPv6LL Jan 20 01:19:48.112710 systemd-networkd[1493]: calicae9077bffd: Gained IPv6LL Jan 20 01:19:48.664791 containerd[1915]: time="2026-01-20T01:19:48.664734313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ltwrz,Uid:e017f9ec-f165-40df-8583-e2bb3341b01b,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:48.756585 systemd-networkd[1493]: cali84473b47c03: Link UP Jan 20 01:19:48.756736 systemd-networkd[1493]: cali84473b47c03: Gained carrier Jan 20 01:19:48.772617 containerd[1915]: 2026-01-20 01:19:48.697 [INFO][5091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0 coredns-66bc5c9577- kube-system e017f9ec-f165-40df-8583-e2bb3341b01b 822 0 2026-01-20 01:19:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 coredns-66bc5c9577-ltwrz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84473b47c03 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" 
WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-" Jan 20 01:19:48.772617 containerd[1915]: 2026-01-20 01:19:48.697 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.772617 containerd[1915]: 2026-01-20 01:19:48.716 [INFO][5104] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" HandleID="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Workload="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.716 [INFO][5104] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" HandleID="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Workload="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"coredns-66bc5c9577-ltwrz", "timestamp":"2026-01-20 01:19:48.716788546 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.717 [INFO][5104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.717 [INFO][5104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.717 [INFO][5104] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.723 [INFO][5104] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.728 [INFO][5104] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.732 [INFO][5104] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.733 [INFO][5104] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772759 containerd[1915]: 2026-01-20 01:19:48.735 [INFO][5104] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.735 [INFO][5104] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.736 [INFO][5104] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031 Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.744 [INFO][5104] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.750 [INFO][5104] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.54.5/26] block=192.168.54.0/26 handle="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.750 [INFO][5104] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.5/26] handle="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.750 [INFO][5104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:19:48.772901 containerd[1915]: 2026-01-20 01:19:48.750 [INFO][5104] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.5/26] IPv6=[] ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" HandleID="k8s-pod-network.9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Workload="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.772993 containerd[1915]: 2026-01-20 01:19:48.753 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e017f9ec-f165-40df-8583-e2bb3341b01b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"coredns-66bc5c9577-ltwrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84473b47c03", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:48.772993 containerd[1915]: 2026-01-20 01:19:48.753 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.5/32] ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.772993 containerd[1915]: 2026-01-20 01:19:48.753 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84473b47c03 
ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.772993 containerd[1915]: 2026-01-20 01:19:48.755 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.772993 containerd[1915]: 2026-01-20 01:19:48.756 [INFO][5091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e017f9ec-f165-40df-8583-e2bb3341b01b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031", 
Pod:"coredns-66bc5c9577-ltwrz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84473b47c03", MAC:"16:5f:94:72:50:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:48.773112 containerd[1915]: 2026-01-20 01:19:48.769 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" Namespace="kube-system" Pod="coredns-66bc5c9577-ltwrz" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--ltwrz-eth0" Jan 20 01:19:48.814370 kubelet[3470]: E0120 01:19:48.813292 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:19:48.814370 kubelet[3470]: E0120 01:19:48.813451 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:19:48.815822 containerd[1915]: time="2026-01-20T01:19:48.813913030Z" level=info msg="connecting to shim 9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031" address="unix:///run/containerd/s/aa053d12221e5dc490c5dc426e9d39431f9b0fa009a41dda0719f97458868dd8" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:48.848612 systemd[1]: Started cri-containerd-9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031.scope - libcontainer container 9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031. 
Jan 20 01:19:48.884877 containerd[1915]: time="2026-01-20T01:19:48.884841717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ltwrz,Uid:e017f9ec-f165-40df-8583-e2bb3341b01b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031\"" Jan 20 01:19:48.892546 containerd[1915]: time="2026-01-20T01:19:48.892518445Z" level=info msg="CreateContainer within sandbox \"9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:19:48.911936 containerd[1915]: time="2026-01-20T01:19:48.911591721Z" level=info msg="Container 9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:48.923809 containerd[1915]: time="2026-01-20T01:19:48.923694993Z" level=info msg="CreateContainer within sandbox \"9fead74b3c1d7da843af1fad22aa2617d2ba156aadd3393e53261013e02ad031\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d\"" Jan 20 01:19:48.924778 containerd[1915]: time="2026-01-20T01:19:48.924461813Z" level=info msg="StartContainer for \"9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d\"" Jan 20 01:19:48.925972 containerd[1915]: time="2026-01-20T01:19:48.925921829Z" level=info msg="connecting to shim 9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d" address="unix:///run/containerd/s/aa053d12221e5dc490c5dc426e9d39431f9b0fa009a41dda0719f97458868dd8" protocol=ttrpc version=3 Jan 20 01:19:48.943623 systemd[1]: Started cri-containerd-9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d.scope - libcontainer container 9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d. 
Jan 20 01:19:48.972587 containerd[1915]: time="2026-01-20T01:19:48.972561683Z" level=info msg="StartContainer for \"9bf27c88a5fd9b7b88768d0ad30b12fac9e6b182acef5049ed2ad75e4ab3d42d\" returns successfully" Jan 20 01:19:49.663674 containerd[1915]: time="2026-01-20T01:19:49.663633432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6gnsc,Uid:e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f,Namespace:kube-system,Attempt:0,}" Jan 20 01:19:49.669839 containerd[1915]: time="2026-01-20T01:19:49.669809135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7465b9f86b-8bs6q,Uid:e522ab69-60c4-4bed-bd35-afe9cd973ba9,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:19:49.673212 containerd[1915]: time="2026-01-20T01:19:49.671961650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-7q94c,Uid:b6739b36-c8fc-46a6-8652-d6a5a25da0c2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:19:49.678333 containerd[1915]: time="2026-01-20T01:19:49.678315206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dwz6j,Uid:394686c1-b41d-41ec-8fb4-e2ecac3e5f25,Namespace:calico-system,Attempt:0,}" Jan 20 01:19:49.843979 systemd-networkd[1493]: cali7ce2d34e066: Link UP Jan 20 01:19:49.845683 systemd-networkd[1493]: cali7ce2d34e066: Gained carrier Jan 20 01:19:49.864523 kubelet[3470]: I0120 01:19:49.864141 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ltwrz" podStartSLOduration=39.86412741 podStartE2EDuration="39.86412741s" podCreationTimestamp="2026-01-20 01:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:49.842352708 +0000 UTC m=+44.290666317" watchObservedRunningTime="2026-01-20 01:19:49.86412741 +0000 UTC m=+44.312440979" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.734 [INFO][5199] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0 coredns-66bc5c9577- kube-system e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f 819 0 2026-01-20 01:19:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 coredns-66bc5c9577-6gnsc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ce2d34e066 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.734 [INFO][5199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.780 [INFO][5249] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" HandleID="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Workload="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.780 [INFO][5249] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" HandleID="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" 
Workload="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b1080), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"coredns-66bc5c9577-6gnsc", "timestamp":"2026-01-20 01:19:49.780862141 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.780 [INFO][5249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.781 [INFO][5249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.781 [INFO][5249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.793 [INFO][5249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.799 [INFO][5249] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.803 [INFO][5249] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.805 [INFO][5249] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.807 [INFO][5249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 
containerd[1915]: 2026-01-20 01:19:49.808 [INFO][5249] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.809 [INFO][5249] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6 Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.817 [INFO][5249] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.835 [INFO][5249] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.6/26] block=192.168.54.0/26 handle="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.835 [INFO][5249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.6/26] handle="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.836 [INFO][5249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:19:49.866845 containerd[1915]: 2026-01-20 01:19:49.836 [INFO][5249] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.6/26] IPv6=[] ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" HandleID="k8s-pod-network.0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Workload="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.867211 containerd[1915]: 2026-01-20 01:19:49.838 [INFO][5199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"coredns-66bc5c9577-6gnsc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali7ce2d34e066", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:49.867211 containerd[1915]: 2026-01-20 01:19:49.838 [INFO][5199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.6/32] ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.867211 containerd[1915]: 2026-01-20 01:19:49.838 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ce2d34e066 ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.867211 containerd[1915]: 2026-01-20 01:19:49.843 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.867211 
containerd[1915]: 2026-01-20 01:19:49.846 [INFO][5199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6", Pod:"coredns-66bc5c9577-6gnsc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ce2d34e066", MAC:"3e:be:9e:98:42:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:49.868052 containerd[1915]: 2026-01-20 01:19:49.863 [INFO][5199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" Namespace="kube-system" Pod="coredns-66bc5c9577-6gnsc" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-coredns--66bc5c9577--6gnsc-eth0" Jan 20 01:19:49.900696 containerd[1915]: time="2026-01-20T01:19:49.900660846Z" level=info msg="connecting to shim 0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6" address="unix:///run/containerd/s/8f2d30150dd5cea122945ab9271616caa85ba350dccdb33771f774276e35a4d7" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:49.922646 systemd[1]: Started cri-containerd-0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6.scope - libcontainer container 0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6. 
Jan 20 01:19:49.947243 systemd-networkd[1493]: cali8c85411d93c: Link UP Jan 20 01:19:49.947387 systemd-networkd[1493]: cali8c85411d93c: Gained carrier Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.764 [INFO][5228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0 goldmane-7c778bb748- calico-system 394686c1-b41d-41ec-8fb4-e2ecac3e5f25 825 0 2026-01-20 01:19:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 goldmane-7c778bb748-dwz6j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8c85411d93c [] [] }} ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.765 [INFO][5228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.805 [INFO][5266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" HandleID="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Workload="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.805 [INFO][5266] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" HandleID="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Workload="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"goldmane-7c778bb748-dwz6j", "timestamp":"2026-01-20 01:19:49.805540472 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.805 [INFO][5266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.835 [INFO][5266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.835 [INFO][5266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.892 [INFO][5266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.901 [INFO][5266] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.908 [INFO][5266] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.912 [INFO][5266] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.922 [INFO][5266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.922 [INFO][5266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.924 [INFO][5266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.929 [INFO][5266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.940 [INFO][5266] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.54.7/26] block=192.168.54.0/26 handle="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.940 [INFO][5266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.7/26] handle="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.941 [INFO][5266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:19:49.963308 containerd[1915]: 2026-01-20 01:19:49.941 [INFO][5266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.7/26] IPv6=[] ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" HandleID="k8s-pod-network.dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Workload="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.964138 containerd[1915]: 2026-01-20 01:19:49.942 [INFO][5228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"394686c1-b41d-41ec-8fb4-e2ecac3e5f25", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"goldmane-7c778bb748-dwz6j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c85411d93c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:49.964138 containerd[1915]: 2026-01-20 01:19:49.942 [INFO][5228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.7/32] ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.964138 containerd[1915]: 2026-01-20 01:19:49.942 [INFO][5228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c85411d93c ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.964138 containerd[1915]: 2026-01-20 01:19:49.944 [INFO][5228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.964138 containerd[1915]: 2026-01-20 01:19:49.944 [INFO][5228] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"394686c1-b41d-41ec-8fb4-e2ecac3e5f25", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c", Pod:"goldmane-7c778bb748-dwz6j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c85411d93c", MAC:"6e:ae:25:ac:20:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:49.964138 containerd[1915]: 2026-01-20 01:19:49.960 [INFO][5228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" Namespace="calico-system" Pod="goldmane-7c778bb748-dwz6j" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-goldmane--7c778bb748--dwz6j-eth0" Jan 20 01:19:49.978270 containerd[1915]: time="2026-01-20T01:19:49.977233910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6gnsc,Uid:e0b6976f-5bf9-4b17-8633-cc09ab7ecd6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6\"" Jan 20 01:19:49.985912 containerd[1915]: time="2026-01-20T01:19:49.985699355Z" level=info msg="CreateContainer within sandbox \"0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:19:50.016754 containerd[1915]: time="2026-01-20T01:19:50.016673330Z" level=info msg="connecting to shim dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c" address="unix:///run/containerd/s/a919a8b6fac55efdbc164579f987e5d47b06582e6f4507ff9ea7f7d309e21368" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:50.019573 containerd[1915]: time="2026-01-20T01:19:50.019534599Z" level=info msg="Container e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:19:50.037219 containerd[1915]: time="2026-01-20T01:19:50.037189533Z" level=info msg="CreateContainer within sandbox \"0ec3d2ae48db861fb7ca0a30174794e650d851bfded8b6acceb8adb64198b4a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012\"" Jan 20 01:19:50.038728 containerd[1915]: time="2026-01-20T01:19:50.038654453Z" level=info msg="StartContainer for \"e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012\"" Jan 20 01:19:50.039211 containerd[1915]: time="2026-01-20T01:19:50.039184851Z" level=info msg="connecting to shim 
e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012" address="unix:///run/containerd/s/8f2d30150dd5cea122945ab9271616caa85ba350dccdb33771f774276e35a4d7" protocol=ttrpc version=3 Jan 20 01:19:50.053058 systemd[1]: Started cri-containerd-dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c.scope - libcontainer container dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c. Jan 20 01:19:50.068628 systemd[1]: Started cri-containerd-e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012.scope - libcontainer container e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012. Jan 20 01:19:50.069656 systemd-networkd[1493]: cali5e3824aa376: Link UP Jan 20 01:19:50.069799 systemd-networkd[1493]: cali5e3824aa376: Gained carrier Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.765 [INFO][5223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0 calico-apiserver-84b6496599- calico-apiserver b6739b36-c8fc-46a6-8652-d6a5a25da0c2 826 0 2026-01-20 01:19:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b6496599 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 calico-apiserver-84b6496599-7q94c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e3824aa376 [] [] }} ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.765 [INFO][5223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.807 [INFO][5258] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" HandleID="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.807 [INFO][5258] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" HandleID="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"calico-apiserver-84b6496599-7q94c", "timestamp":"2026-01-20 01:19:49.807348217 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.807 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.941 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.941 [INFO][5258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.992 [INFO][5258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:49.999 [INFO][5258] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.011 [INFO][5258] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.016 [INFO][5258] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.020 [INFO][5258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.020 [INFO][5258] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.029 [INFO][5258] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23 Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.037 [INFO][5258] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.058 [INFO][5258] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.54.8/26] block=192.168.54.0/26 handle="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.059 [INFO][5258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.8/26] handle="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.059 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:19:50.099716 containerd[1915]: 2026-01-20 01:19:50.059 [INFO][5258] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.8/26] IPv6=[] ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" HandleID="k8s-pod-network.6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.100074 containerd[1915]: 2026-01-20 01:19:50.063 [INFO][5223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0", GenerateName:"calico-apiserver-84b6496599-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6739b36-c8fc-46a6-8652-d6a5a25da0c2", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"84b6496599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"calico-apiserver-84b6496599-7q94c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e3824aa376", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:50.100074 containerd[1915]: 2026-01-20 01:19:50.063 [INFO][5223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.8/32] ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.100074 containerd[1915]: 2026-01-20 01:19:50.063 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e3824aa376 ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.100074 containerd[1915]: 2026-01-20 01:19:50.069 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" 
WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.100074 containerd[1915]: 2026-01-20 01:19:50.074 [INFO][5223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0", GenerateName:"calico-apiserver-84b6496599-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6739b36-c8fc-46a6-8652-d6a5a25da0c2", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b6496599", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23", Pod:"calico-apiserver-84b6496599-7q94c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e3824aa376", MAC:"ce:50:7f:ed:ff:c8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:50.100074 containerd[1915]: 2026-01-20 01:19:50.095 [INFO][5223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" Namespace="calico-apiserver" Pod="calico-apiserver-84b6496599-7q94c" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--84b6496599--7q94c-eth0" Jan 20 01:19:50.142531 containerd[1915]: time="2026-01-20T01:19:50.141886070Z" level=info msg="StartContainer for \"e35ee03b7bfdd8ba857f4d64e3082349b5efa4e897ef3efa9eb24d53e7311012\" returns successfully" Jan 20 01:19:50.145007 containerd[1915]: time="2026-01-20T01:19:50.144976554Z" level=info msg="connecting to shim 6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23" address="unix:///run/containerd/s/7cf1c21aa636cbfc03ecf839abc0f1121f724e095316672e694a89ffe676214d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:50.171972 systemd[1]: Started cri-containerd-6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23.scope - libcontainer container 6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23. 
Jan 20 01:19:50.177329 systemd-networkd[1493]: cali66858653dca: Link UP Jan 20 01:19:50.180038 systemd-networkd[1493]: cali66858653dca: Gained carrier Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:49.747 [INFO][5210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0 calico-apiserver-7465b9f86b- calico-apiserver e522ab69-60c4-4bed-bd35-afe9cd973ba9 824 0 2026-01-20 01:19:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7465b9f86b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-n-d40ac89f78 calico-apiserver-7465b9f86b-8bs6q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali66858653dca [] [] }} ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:49.747 [INFO][5210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:49.807 [INFO][5255] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" HandleID="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 
01:19:50.201647 containerd[1915]: 2026-01-20 01:19:49.808 [INFO][5255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" HandleID="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-n-d40ac89f78", "pod":"calico-apiserver-7465b9f86b-8bs6q", "timestamp":"2026-01-20 01:19:49.807693195 +0000 UTC"}, Hostname:"ci-4459.2.2-n-d40ac89f78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:49.808 [INFO][5255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.059 [INFO][5255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.060 [INFO][5255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-n-d40ac89f78' Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.094 [INFO][5255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.104 [INFO][5255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.119 [INFO][5255] ipam/ipam.go 511: Trying affinity for 192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.123 [INFO][5255] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.126 [INFO][5255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.126 [INFO][5255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.138 [INFO][5255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305 Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.148 [INFO][5255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.164 [INFO][5255] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.54.9/26] block=192.168.54.0/26 handle="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.164 [INFO][5255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.9/26] handle="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" host="ci-4459.2.2-n-d40ac89f78" Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.164 [INFO][5255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:19:50.201647 containerd[1915]: 2026-01-20 01:19:50.164 [INFO][5255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.9/26] IPv6=[] ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" HandleID="k8s-pod-network.6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Workload="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 01:19:50.202029 containerd[1915]: 2026-01-20 01:19:50.169 [INFO][5210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0", GenerateName:"calico-apiserver-7465b9f86b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e522ab69-60c4-4bed-bd35-afe9cd973ba9", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7465b9f86b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"", Pod:"calico-apiserver-7465b9f86b-8bs6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66858653dca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:50.202029 containerd[1915]: 2026-01-20 01:19:50.169 [INFO][5210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.9/32] ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 01:19:50.202029 containerd[1915]: 2026-01-20 01:19:50.169 [INFO][5210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66858653dca ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 01:19:50.202029 containerd[1915]: 2026-01-20 01:19:50.177 [INFO][5210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" 
WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 01:19:50.202029 containerd[1915]: 2026-01-20 01:19:50.180 [INFO][5210] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0", GenerateName:"calico-apiserver-7465b9f86b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e522ab69-60c4-4bed-bd35-afe9cd973ba9", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7465b9f86b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-n-d40ac89f78", ContainerID:"6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305", Pod:"calico-apiserver-7465b9f86b-8bs6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66858653dca", MAC:"a6:d3:80:c9:88:c2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:19:50.202029 containerd[1915]: 2026-01-20 01:19:50.199 [INFO][5210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" Namespace="calico-apiserver" Pod="calico-apiserver-7465b9f86b-8bs6q" WorkloadEndpoint="ci--4459.2.2--n--d40ac89f78-k8s-calico--apiserver--7465b9f86b--8bs6q-eth0" Jan 20 01:19:50.231598 containerd[1915]: time="2026-01-20T01:19:50.231334155Z" level=info msg="connecting to shim 6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305" address="unix:///run/containerd/s/40f12fe263669b735f6253bca887ad5547333dc2e83364fe81b4a520ae2eeb2a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:19:50.273682 systemd[1]: Started cri-containerd-6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305.scope - libcontainer container 6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305. 
Jan 20 01:19:50.311633 containerd[1915]: time="2026-01-20T01:19:50.311595391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dwz6j,Uid:394686c1-b41d-41ec-8fb4-e2ecac3e5f25,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd21b6b9d70caf010ee0f66b6e6a47fd80b11f43af98a712207ba3a305b1906c\"" Jan 20 01:19:50.315384 containerd[1915]: time="2026-01-20T01:19:50.315293115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:19:50.326643 containerd[1915]: time="2026-01-20T01:19:50.326259683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b6496599-7q94c,Uid:b6739b36-c8fc-46a6-8652-d6a5a25da0c2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6433c83db0c023cb0142e84994f3b6687ddebafa18ab7db31aaf1daa9bbbaa23\"" Jan 20 01:19:50.338188 containerd[1915]: time="2026-01-20T01:19:50.338149661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7465b9f86b-8bs6q,Uid:e522ab69-60c4-4bed-bd35-afe9cd973ba9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6743020c265962c318d78846e8d0f12a7cca3fcb49458e123a7c00989f464305\"" Jan 20 01:19:50.576106 containerd[1915]: time="2026-01-20T01:19:50.576020162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:50.579582 containerd[1915]: time="2026-01-20T01:19:50.579397462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:19:50.579582 containerd[1915]: time="2026-01-20T01:19:50.579480000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:19:50.579879 kubelet[3470]: E0120 01:19:50.579834 3470 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:19:50.579946 kubelet[3470]: E0120 01:19:50.579888 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:19:50.580046 kubelet[3470]: E0120 01:19:50.580021 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dwz6j_calico-system(394686c1-b41d-41ec-8fb4-e2ecac3e5f25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:19:50.580082 kubelet[3470]: E0120 01:19:50.580052 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:19:50.580798 containerd[1915]: time="2026-01-20T01:19:50.580754794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:19:50.800705 systemd-networkd[1493]: cali84473b47c03: Gained IPv6LL Jan 20 01:19:50.819617 
kubelet[3470]: E0120 01:19:50.819473 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:19:50.829879 kubelet[3470]: I0120 01:19:50.829380 3470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6gnsc" podStartSLOduration=40.829368162 podStartE2EDuration="40.829368162s" podCreationTimestamp="2026-01-20 01:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:19:50.8289954 +0000 UTC m=+45.277308969" watchObservedRunningTime="2026-01-20 01:19:50.829368162 +0000 UTC m=+45.277681731" Jan 20 01:19:50.838103 containerd[1915]: time="2026-01-20T01:19:50.837975243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:19:50.840516 containerd[1915]: time="2026-01-20T01:19:50.840471367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:19:50.840674 containerd[1915]: time="2026-01-20T01:19:50.840600466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:19:50.840887 kubelet[3470]: E0120 01:19:50.840838 3470 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:19:50.841025 kubelet[3470]: E0120 01:19:50.840872 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:19:50.841157 kubelet[3470]: E0120 01:19:50.841140 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-7q94c_calico-apiserver(b6739b36-c8fc-46a6-8652-d6a5a25da0c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:19:50.841512 kubelet[3470]: E0120 01:19:50.841383 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2"
Jan 20 01:19:50.841627 containerd[1915]: time="2026-01-20T01:19:50.841467826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:19:51.108325 containerd[1915]: time="2026-01-20T01:19:51.108216012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:19:51.110651 containerd[1915]: time="2026-01-20T01:19:51.110527947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:19:51.110651 containerd[1915]: time="2026-01-20T01:19:51.110601676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:19:51.111078 kubelet[3470]: E0120 01:19:51.110917 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:19:51.111590 kubelet[3470]: E0120 01:19:51.111080 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:19:51.111590 kubelet[3470]: E0120 01:19:51.111152 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7465b9f86b-8bs6q_calico-apiserver(e522ab69-60c4-4bed-bd35-afe9cd973ba9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:19:51.111590 kubelet[3470]: E0120 01:19:51.111207 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:19:51.440794 systemd-networkd[1493]: cali8c85411d93c: Gained IPv6LL
Jan 20 01:19:51.632709 systemd-networkd[1493]: cali7ce2d34e066: Gained IPv6LL
Jan 20 01:19:51.696648 systemd-networkd[1493]: cali66858653dca: Gained IPv6LL
Jan 20 01:19:51.823538 kubelet[3470]: E0120 01:19:51.823168 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25"
Jan 20 01:19:51.823899 kubelet[3470]: E0120 01:19:51.823609 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:19:51.823899 kubelet[3470]: E0120 01:19:51.823717 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2"
Jan 20 01:19:51.889686 systemd-networkd[1493]: cali5e3824aa376: Gained IPv6LL
Jan 20 01:19:56.660742 containerd[1915]: time="2026-01-20T01:19:56.660645414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 20 01:19:56.923396 containerd[1915]: time="2026-01-20T01:19:56.923284345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:19:56.925662 containerd[1915]: time="2026-01-20T01:19:56.925626866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 20 01:19:56.925715 containerd[1915]: time="2026-01-20T01:19:56.925691644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 20 01:19:56.925852 kubelet[3470]: E0120 01:19:56.925793 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 01:19:56.926084 kubelet[3470]: E0120 01:19:56.925854 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 01:19:56.926084 kubelet[3470]: E0120 01:19:56.925926 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:19:56.927208 containerd[1915]: time="2026-01-20T01:19:56.927182830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 20 01:19:57.209448 containerd[1915]: time="2026-01-20T01:19:57.209410144Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:19:57.212444 containerd[1915]: time="2026-01-20T01:19:57.212374052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 20 01:19:57.212444 containerd[1915]: time="2026-01-20T01:19:57.212409661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 20 01:19:57.212639 kubelet[3470]: E0120 01:19:57.212578 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 01:19:57.212703 kubelet[3470]: E0120 01:19:57.212644 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 01:19:57.212729 kubelet[3470]: E0120 01:19:57.212707 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:19:57.212770 kubelet[3470]: E0120 01:19:57.212740 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275"
Jan 20 01:19:59.662385 containerd[1915]: time="2026-01-20T01:19:59.661855326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:20:00.033834 containerd[1915]: time="2026-01-20T01:20:00.033663786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:00.038881 containerd[1915]: time="2026-01-20T01:20:00.038795554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:20:00.038881 containerd[1915]: time="2026-01-20T01:20:00.038853828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:20:00.039023 kubelet[3470]: E0120 01:20:00.038977 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:20:00.039023 kubelet[3470]: E0120 01:20:00.039018 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:20:00.039621 kubelet[3470]: E0120 01:20:00.039089 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-bdfgs_calico-apiserver(b0ce4d6d-0160-4871-9c3a-73730559c915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:00.039621 kubelet[3470]: E0120 01:20:00.039121 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915"
Jan 20 01:20:02.660519 containerd[1915]: time="2026-01-20T01:20:02.660338211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 20 01:20:02.964322 containerd[1915]: time="2026-01-20T01:20:02.964279184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:02.967076 containerd[1915]: time="2026-01-20T01:20:02.967041646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 20 01:20:02.967211 containerd[1915]: time="2026-01-20T01:20:02.967112992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 20 01:20:02.967388 kubelet[3470]: E0120 01:20:02.967329 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 01:20:02.967388 kubelet[3470]: E0120 01:20:02.967375 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 20 01:20:02.968250 kubelet[3470]: E0120 01:20:02.967689 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:02.969088 containerd[1915]: time="2026-01-20T01:20:02.968668556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 20 01:20:03.247143 containerd[1915]: time="2026-01-20T01:20:03.246603810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:03.250303 containerd[1915]: time="2026-01-20T01:20:03.250267522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 20 01:20:03.250422 containerd[1915]: time="2026-01-20T01:20:03.250303403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 20 01:20:03.250647 kubelet[3470]: E0120 01:20:03.250611 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 01:20:03.250740 kubelet[3470]: E0120 01:20:03.250728 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 01:20:03.251058 kubelet[3470]: E0120 01:20:03.251029 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d584bf5d7-gtntj_calico-system(07b4036f-53cd-480d-a4eb-8badfec721c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:03.251140 containerd[1915]: time="2026-01-20T01:20:03.251057032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 20 01:20:03.251618 kubelet[3470]: E0120 01:20:03.251584 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3"
Jan 20 01:20:03.489051 containerd[1915]: time="2026-01-20T01:20:03.488926726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:03.491470 containerd[1915]: time="2026-01-20T01:20:03.491441181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 20 01:20:03.491623 containerd[1915]: time="2026-01-20T01:20:03.491506102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 20 01:20:03.491748 kubelet[3470]: E0120 01:20:03.491716 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 01:20:03.491792 kubelet[3470]: E0120 01:20:03.491751 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 20 01:20:03.491848 kubelet[3470]: E0120 01:20:03.491818 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:03.491886 kubelet[3470]: E0120 01:20:03.491864 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43"
Jan 20 01:20:04.661225 containerd[1915]: time="2026-01-20T01:20:04.660994553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:20:04.893160 containerd[1915]: time="2026-01-20T01:20:04.893106660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:04.895479 containerd[1915]: time="2026-01-20T01:20:04.895444278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:20:04.895684 containerd[1915]: time="2026-01-20T01:20:04.895462183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:20:04.895713 kubelet[3470]: E0120 01:20:04.895600 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:20:04.895713 kubelet[3470]: E0120 01:20:04.895635 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:20:04.896412 kubelet[3470]: E0120 01:20:04.896085 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-7q94c_calico-apiserver(b6739b36-c8fc-46a6-8652-d6a5a25da0c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:04.896412 kubelet[3470]: E0120 01:20:04.896119 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2"
Jan 20 01:20:04.896867 containerd[1915]: time="2026-01-20T01:20:04.896847686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 20 01:20:05.135591 containerd[1915]: time="2026-01-20T01:20:05.135551002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:05.137872 containerd[1915]: time="2026-01-20T01:20:05.137769801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 20 01:20:05.137998 containerd[1915]: time="2026-01-20T01:20:05.137820378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:20:05.138127 kubelet[3470]: E0120 01:20:05.138084 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 01:20:05.138156 kubelet[3470]: E0120 01:20:05.138146 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 01:20:05.138550 kubelet[3470]: E0120 01:20:05.138228 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dwz6j_calico-system(394686c1-b41d-41ec-8fb4-e2ecac3e5f25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:05.138550 kubelet[3470]: E0120 01:20:05.138288 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25"
Jan 20 01:20:06.661847 containerd[1915]: time="2026-01-20T01:20:06.661811012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:20:06.909974 containerd[1915]: time="2026-01-20T01:20:06.909929482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:06.912684 containerd[1915]: time="2026-01-20T01:20:06.912294589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:20:06.912684 containerd[1915]: time="2026-01-20T01:20:06.912349558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:20:06.912763 kubelet[3470]: E0120 01:20:06.912433 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:20:06.913164 kubelet[3470]: E0120 01:20:06.913005 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:20:06.913164 kubelet[3470]: E0120 01:20:06.913095 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7465b9f86b-8bs6q_calico-apiserver(e522ab69-60c4-4bed-bd35-afe9cd973ba9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:06.913164 kubelet[3470]: E0120 01:20:06.913122 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:20:09.662323 kubelet[3470]: E0120 01:20:09.662051 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275"
Jan 20 01:20:13.661962 kubelet[3470]: E0120 01:20:13.661762 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915"
Jan 20 01:20:16.661140 kubelet[3470]: E0120 01:20:16.660544 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2"
Jan 20 01:20:16.661140 kubelet[3470]: E0120 01:20:16.661096 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3"
Jan 20 01:20:17.662854 kubelet[3470]: E0120 01:20:17.662787 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43"
Jan 20 01:20:19.661513 kubelet[3470]: E0120 01:20:19.661411 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:20:20.663525 kubelet[3470]: E0120 01:20:20.661346 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25"
Jan 20 01:20:23.664219 containerd[1915]: time="2026-01-20T01:20:23.664181350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 20 01:20:23.939835 containerd[1915]: time="2026-01-20T01:20:23.939778150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:23.942564 containerd[1915]: time="2026-01-20T01:20:23.942478314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 20 01:20:23.942564 containerd[1915]: time="2026-01-20T01:20:23.942530043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 20 01:20:23.943746 kubelet[3470]: E0120 01:20:23.943705 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 01:20:23.944028 kubelet[3470]: E0120 01:20:23.943752 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 01:20:23.944028 kubelet[3470]: E0120 01:20:23.943817 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:20:23.945329 containerd[1915]: time="2026-01-20T01:20:23.945301057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 20 01:20:24.189397 containerd[1915]: time="2026-01-20T01:20:24.189353316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:20:24.194896 containerd[1915]: time="2026-01-20T01:20:24.194801173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc
= failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:20:24.194896 containerd[1915]: time="2026-01-20T01:20:24.194883175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:20:24.195253 kubelet[3470]: E0120 01:20:24.195205 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:20:24.195326 kubelet[3470]: E0120 01:20:24.195258 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:20:24.195667 kubelet[3470]: E0120 01:20:24.195328 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:24.195667 kubelet[3470]: E0120 01:20:24.195360 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:20:25.662312 containerd[1915]: time="2026-01-20T01:20:25.662264406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:20:25.965707 containerd[1915]: time="2026-01-20T01:20:25.965539119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:25.967827 containerd[1915]: time="2026-01-20T01:20:25.967796358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:20:25.967977 containerd[1915]: time="2026-01-20T01:20:25.967800214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:20:25.968140 kubelet[3470]: E0120 01:20:25.968103 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 
20 01:20:25.968431 kubelet[3470]: E0120 01:20:25.968148 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:20:25.968431 kubelet[3470]: E0120 01:20:25.968224 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-bdfgs_calico-apiserver(b0ce4d6d-0160-4871-9c3a-73730559c915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:25.968431 kubelet[3470]: E0120 01:20:25.968258 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:20:30.660736 containerd[1915]: time="2026-01-20T01:20:30.660674526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:20:30.931658 containerd[1915]: time="2026-01-20T01:20:30.931386083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:30.934563 containerd[1915]: time="2026-01-20T01:20:30.934530335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:20:30.934636 containerd[1915]: time="2026-01-20T01:20:30.934593785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:20:30.934830 kubelet[3470]: E0120 01:20:30.934782 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:20:30.935064 kubelet[3470]: E0120 01:20:30.934833 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:20:30.935064 kubelet[3470]: E0120 01:20:30.934977 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7465b9f86b-8bs6q_calico-apiserver(e522ab69-60c4-4bed-bd35-afe9cd973ba9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:30.935064 kubelet[3470]: E0120 01:20:30.935004 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9" Jan 20 01:20:30.935590 containerd[1915]: time="2026-01-20T01:20:30.935524802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:20:31.171024 containerd[1915]: time="2026-01-20T01:20:31.170969941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:31.173486 containerd[1915]: time="2026-01-20T01:20:31.173389390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:20:31.173486 containerd[1915]: time="2026-01-20T01:20:31.173462912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:20:31.173700 kubelet[3470]: E0120 01:20:31.173603 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:20:31.173700 kubelet[3470]: E0120 01:20:31.173644 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 
20 01:20:31.174007 kubelet[3470]: E0120 01:20:31.173717 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-7q94c_calico-apiserver(b6739b36-c8fc-46a6-8652-d6a5a25da0c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:31.174007 kubelet[3470]: E0120 01:20:31.173742 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2" Jan 20 01:20:31.661525 containerd[1915]: time="2026-01-20T01:20:31.661375935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:20:31.947628 containerd[1915]: time="2026-01-20T01:20:31.947590348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:31.950191 containerd[1915]: time="2026-01-20T01:20:31.950160401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:20:31.950264 containerd[1915]: time="2026-01-20T01:20:31.950223587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=85" Jan 20 01:20:31.950484 kubelet[3470]: E0120 01:20:31.950416 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:20:31.950484 kubelet[3470]: E0120 01:20:31.950468 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:20:31.950767 kubelet[3470]: E0120 01:20:31.950655 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d584bf5d7-gtntj_calico-system(07b4036f-53cd-480d-a4eb-8badfec721c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:31.950767 kubelet[3470]: E0120 01:20:31.950692 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" 
podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:20:31.951153 containerd[1915]: time="2026-01-20T01:20:31.951120187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:20:32.183513 containerd[1915]: time="2026-01-20T01:20:32.183465699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:32.186257 containerd[1915]: time="2026-01-20T01:20:32.186176620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:20:32.186257 containerd[1915]: time="2026-01-20T01:20:32.186231805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:20:32.186826 kubelet[3470]: E0120 01:20:32.186477 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:20:32.186826 kubelet[3470]: E0120 01:20:32.186572 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:20:32.186826 kubelet[3470]: E0120 01:20:32.186650 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:32.188004 containerd[1915]: time="2026-01-20T01:20:32.187982052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:20:32.465248 containerd[1915]: time="2026-01-20T01:20:32.465152279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:32.467397 containerd[1915]: time="2026-01-20T01:20:32.467364218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:20:32.467458 containerd[1915]: time="2026-01-20T01:20:32.467442572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:20:32.467633 kubelet[3470]: E0120 01:20:32.467589 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:20:32.467702 kubelet[3470]: E0120 01:20:32.467641 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:20:32.467724 kubelet[3470]: E0120 01:20:32.467711 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:32.467770 kubelet[3470]: E0120 01:20:32.467754 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:20:33.661391 containerd[1915]: time="2026-01-20T01:20:33.661154671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:20:33.913493 containerd[1915]: time="2026-01-20T01:20:33.913389901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:20:33.916372 containerd[1915]: time="2026-01-20T01:20:33.916339508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:20:33.916486 containerd[1915]: time="2026-01-20T01:20:33.916394029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:20:33.916732 kubelet[3470]: E0120 01:20:33.916575 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:20:33.916732 kubelet[3470]: E0120 01:20:33.916614 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:20:33.916732 kubelet[3470]: E0120 01:20:33.916678 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dwz6j_calico-system(394686c1-b41d-41ec-8fb4-e2ecac3e5f25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:20:33.916732 kubelet[3470]: E0120 01:20:33.916703 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:20:38.661315 kubelet[3470]: E0120 01:20:38.661251 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:20:39.661041 kubelet[3470]: E0120 01:20:39.660999 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:20:42.660727 kubelet[3470]: 
E0120 01:20:42.660634 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2" Jan 20 01:20:45.662324 kubelet[3470]: E0120 01:20:45.661739 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:20:45.664563 kubelet[3470]: E0120 01:20:45.663436 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9" Jan 20 01:20:45.666674 kubelet[3470]: E0120 01:20:45.666600 3470 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:20:48.660291 kubelet[3470]: E0120 01:20:48.660113 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:20:48.812541 systemd[1]: Started sshd@7-10.200.20.20:22-10.200.16.10:37274.service - OpenSSH per-connection server daemon (10.200.16.10:37274). 
Jan 20 01:20:49.273383 sshd[5628]: Accepted publickey for core from 10.200.16.10 port 37274 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:20:49.274818 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:20:49.279338 systemd-logind[1897]: New session 10 of user core. Jan 20 01:20:49.288706 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:20:49.662162 sshd[5634]: Connection closed by 10.200.16.10 port 37274 Jan 20 01:20:49.662838 sshd-session[5628]: pam_unix(sshd:session): session closed for user core Jan 20 01:20:49.667141 systemd[1]: sshd@7-10.200.20.20:22-10.200.16.10:37274.service: Deactivated successfully. Jan 20 01:20:49.670038 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:20:49.671291 systemd-logind[1897]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:20:49.673895 systemd-logind[1897]: Removed session 10. Jan 20 01:20:51.662097 kubelet[3470]: E0120 01:20:51.662045 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:20:53.662839 kubelet[3470]: E0120 01:20:53.662795 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:20:54.660987 kubelet[3470]: E0120 01:20:54.660889 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2" Jan 20 01:20:54.756691 systemd[1]: Started sshd@8-10.200.20.20:22-10.200.16.10:53378.service - OpenSSH per-connection server daemon (10.200.16.10:53378). Jan 20 01:20:55.253219 sshd[5648]: Accepted publickey for core from 10.200.16.10 port 53378 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:20:55.254320 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:20:55.258342 systemd-logind[1897]: New session 11 of user core. Jan 20 01:20:55.265616 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 01:20:55.662627 sshd[5651]: Connection closed by 10.200.16.10 port 53378
Jan 20 01:20:55.663699 sshd-session[5648]: pam_unix(sshd:session): session closed for user core
Jan 20 01:20:55.667201 systemd[1]: sshd@8-10.200.20.20:22-10.200.16.10:53378.service: Deactivated successfully.
Jan 20 01:20:55.669873 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 01:20:55.670879 systemd-logind[1897]: Session 11 logged out. Waiting for processes to exit.
Jan 20 01:20:55.671918 systemd-logind[1897]: Removed session 11.
Jan 20 01:20:56.662361 kubelet[3470]: E0120 01:20:56.662235 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43"
Jan 20 01:20:58.661881 kubelet[3470]: E0120 01:20:58.661399 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:20:59.662446 kubelet[3470]: E0120 01:20:59.661942 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3"
Jan 20 01:21:00.660607 kubelet[3470]: E0120 01:21:00.660486 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25"
Jan 20 01:21:00.768877 systemd[1]: Started sshd@9-10.200.20.20:22-10.200.16.10:35532.service - OpenSSH per-connection server daemon (10.200.16.10:35532).
Jan 20 01:21:01.223151 sshd[5664]: Accepted publickey for core from 10.200.16.10 port 35532 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:01.224163 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:01.228391 systemd-logind[1897]: New session 12 of user core.
Jan 20 01:21:01.239658 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 01:21:01.600863 sshd[5667]: Connection closed by 10.200.16.10 port 35532
Jan 20 01:21:01.601314 sshd-session[5664]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:01.605065 systemd[1]: sshd@9-10.200.20.20:22-10.200.16.10:35532.service: Deactivated successfully.
Jan 20 01:21:01.606626 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 01:21:01.607286 systemd-logind[1897]: Session 12 logged out. Waiting for processes to exit.
Jan 20 01:21:01.608493 systemd-logind[1897]: Removed session 12.
Jan 20 01:21:01.681243 systemd[1]: Started sshd@10-10.200.20.20:22-10.200.16.10:35546.service - OpenSSH per-connection server daemon (10.200.16.10:35546).
Jan 20 01:21:02.137521 sshd[5679]: Accepted publickey for core from 10.200.16.10 port 35546 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:02.138973 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:02.143832 systemd-logind[1897]: New session 13 of user core.
Jan 20 01:21:02.148958 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 01:21:02.557341 sshd[5682]: Connection closed by 10.200.16.10 port 35546
Jan 20 01:21:02.557897 sshd-session[5679]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:02.561240 systemd[1]: sshd@10-10.200.20.20:22-10.200.16.10:35546.service: Deactivated successfully.
Jan 20 01:21:02.565335 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 01:21:02.566452 systemd-logind[1897]: Session 13 logged out. Waiting for processes to exit.
Jan 20 01:21:02.568039 systemd-logind[1897]: Removed session 13.
Jan 20 01:21:02.646714 systemd[1]: Started sshd@11-10.200.20.20:22-10.200.16.10:35550.service - OpenSSH per-connection server daemon (10.200.16.10:35550).
Jan 20 01:21:02.662319 kubelet[3470]: E0120 01:21:02.662049 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915"
Jan 20 01:21:03.140098 sshd[5698]: Accepted publickey for core from 10.200.16.10 port 35550 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:03.141115 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:03.144766 systemd-logind[1897]: New session 14 of user core.
Jan 20 01:21:03.151640 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 01:21:03.546460 sshd[5701]: Connection closed by 10.200.16.10 port 35550
Jan 20 01:21:03.547944 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:03.553700 systemd-logind[1897]: Session 14 logged out. Waiting for processes to exit.
Jan 20 01:21:03.554221 systemd[1]: sshd@11-10.200.20.20:22-10.200.16.10:35550.service: Deactivated successfully.
Jan 20 01:21:03.556973 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 01:21:03.559599 systemd-logind[1897]: Removed session 14.
Jan 20 01:21:05.662485 containerd[1915]: time="2026-01-20T01:21:05.662432048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 20 01:21:05.927015 containerd[1915]: time="2026-01-20T01:21:05.926786743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:21:05.929328 containerd[1915]: time="2026-01-20T01:21:05.929217427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 20 01:21:05.929328 containerd[1915]: time="2026-01-20T01:21:05.929300229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 20 01:21:05.929464 kubelet[3470]: E0120 01:21:05.929423 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 01:21:05.929951 kubelet[3470]: E0120 01:21:05.929465 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 20 01:21:05.929951 kubelet[3470]: E0120 01:21:05.929544 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:21:05.931079 containerd[1915]: time="2026-01-20T01:21:05.931051534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 20 01:21:06.205234 containerd[1915]: time="2026-01-20T01:21:06.205063147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:21:06.207411 containerd[1915]: time="2026-01-20T01:21:06.207332858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 20 01:21:06.207411 containerd[1915]: time="2026-01-20T01:21:06.207369955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 20 01:21:06.207654 kubelet[3470]: E0120 01:21:06.207533 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 01:21:06.207654 kubelet[3470]: E0120 01:21:06.207573 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 20 01:21:06.207845 kubelet[3470]: E0120 01:21:06.207795 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5f68b67c86-6bqpm_calico-system(55b2502a-51c1-4f19-87b3-fdc15037a275): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:21:06.208012 kubelet[3470]: E0120 01:21:06.207832 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275"
Jan 20 01:21:08.630700 systemd[1]: Started sshd@12-10.200.20.20:22-10.200.16.10:35566.service - OpenSSH per-connection server daemon (10.200.16.10:35566).
Jan 20 01:21:08.664406 kubelet[3470]: E0120 01:21:08.664339 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43"
Jan 20 01:21:09.094174 sshd[5719]: Accepted publickey for core from 10.200.16.10 port 35566 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:09.095282 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:09.101216 systemd-logind[1897]: New session 15 of user core.
Jan 20 01:21:09.104618 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 01:21:09.491768 sshd[5722]: Connection closed by 10.200.16.10 port 35566
Jan 20 01:21:09.491290 sshd-session[5719]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:09.494228 systemd[1]: sshd@12-10.200.20.20:22-10.200.16.10:35566.service: Deactivated successfully.
Jan 20 01:21:09.496644 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 01:21:09.499392 systemd-logind[1897]: Session 15 logged out. Waiting for processes to exit.
Jan 20 01:21:09.501239 systemd-logind[1897]: Removed session 15.
Jan 20 01:21:09.660538 kubelet[3470]: E0120 01:21:09.660472 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2"
Jan 20 01:21:10.660583 kubelet[3470]: E0120 01:21:10.660469 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:21:11.664260 kubelet[3470]: E0120 01:21:11.664186 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25"
Jan 20 01:21:13.662363 containerd[1915]: time="2026-01-20T01:21:13.662321692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 20 01:21:13.944061 containerd[1915]: time="2026-01-20T01:21:13.944011684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:21:13.946441 containerd[1915]: time="2026-01-20T01:21:13.946410615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 20 01:21:13.946549 containerd[1915]: time="2026-01-20T01:21:13.946469713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 20 01:21:13.946670 kubelet[3470]: E0120 01:21:13.946616 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 01:21:13.946903 kubelet[3470]: E0120 01:21:13.946674 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 20 01:21:13.946903 kubelet[3470]: E0120 01:21:13.946755 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d584bf5d7-gtntj_calico-system(07b4036f-53cd-480d-a4eb-8badfec721c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:21:13.946903 kubelet[3470]: E0120 01:21:13.946783 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3"
Jan 20 01:21:14.578692 systemd[1]: Started sshd@13-10.200.20.20:22-10.200.16.10:34178.service - OpenSSH per-connection server daemon (10.200.16.10:34178).
Jan 20 01:21:14.661149 containerd[1915]: time="2026-01-20T01:21:14.661034897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:21:14.948927 containerd[1915]: time="2026-01-20T01:21:14.948885053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:21:14.955585 containerd[1915]: time="2026-01-20T01:21:14.955544286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:21:14.955662 containerd[1915]: time="2026-01-20T01:21:14.955628817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:21:14.955851 kubelet[3470]: E0120 01:21:14.955787 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:21:14.955851 kubelet[3470]: E0120 01:21:14.955836 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:21:14.956302 kubelet[3470]: E0120 01:21:14.956189 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-bdfgs_calico-apiserver(b0ce4d6d-0160-4871-9c3a-73730559c915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:21:14.956670 kubelet[3470]: E0120 01:21:14.956376 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915"
Jan 20 01:21:15.028968 sshd[5745]: Accepted publickey for core from 10.200.16.10 port 34178 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:15.030613 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:15.036303 systemd-logind[1897]: New session 16 of user core.
Jan 20 01:21:15.041619 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 01:21:15.410134 sshd[5748]: Connection closed by 10.200.16.10 port 34178
Jan 20 01:21:15.410840 sshd-session[5745]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:15.414260 systemd[1]: sshd@13-10.200.20.20:22-10.200.16.10:34178.service: Deactivated successfully.
Jan 20 01:21:15.417114 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 01:21:15.417892 systemd-logind[1897]: Session 16 logged out. Waiting for processes to exit.
Jan 20 01:21:15.420195 systemd-logind[1897]: Removed session 16.
Jan 20 01:21:20.496274 systemd[1]: Started sshd@14-10.200.20.20:22-10.200.16.10:48292.service - OpenSSH per-connection server daemon (10.200.16.10:48292).
Jan 20 01:21:20.992266 sshd[5800]: Accepted publickey for core from 10.200.16.10 port 48292 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:20.993082 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:20.999984 systemd-logind[1897]: New session 17 of user core.
Jan 20 01:21:21.005623 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 01:21:21.399671 sshd[5803]: Connection closed by 10.200.16.10 port 48292
Jan 20 01:21:21.400283 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:21.403837 systemd-logind[1897]: Session 17 logged out. Waiting for processes to exit.
Jan 20 01:21:21.404958 systemd[1]: sshd@14-10.200.20.20:22-10.200.16.10:48292.service: Deactivated successfully.
Jan 20 01:21:21.406721 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 01:21:21.410849 systemd-logind[1897]: Removed session 17.
Jan 20 01:21:21.476684 systemd[1]: Started sshd@15-10.200.20.20:22-10.200.16.10:48302.service - OpenSSH per-connection server daemon (10.200.16.10:48302).
Jan 20 01:21:21.664187 containerd[1915]: time="2026-01-20T01:21:21.663688792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:21:21.665026 kubelet[3470]: E0120 01:21:21.664909 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275"
Jan 20 01:21:21.883407 containerd[1915]: time="2026-01-20T01:21:21.883371186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:21:21.886514 containerd[1915]: time="2026-01-20T01:21:21.886458320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:21:21.886580 containerd[1915]: time="2026-01-20T01:21:21.886519177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:21:21.886852 kubelet[3470]: E0120 01:21:21.886798 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:21:21.886852 kubelet[3470]: E0120 01:21:21.886836 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:21:21.887084 kubelet[3470]: E0120 01:21:21.887027 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84b6496599-7q94c_calico-apiserver(b6739b36-c8fc-46a6-8652-d6a5a25da0c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:21:21.887084 kubelet[3470]: E0120 01:21:21.887060 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2"
Jan 20 01:21:21.935168 sshd[5815]: Accepted publickey for core from 10.200.16.10 port 48302 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:21.935832 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:21.940721 systemd-logind[1897]: New session 18 of user core.
Jan 20 01:21:21.944710 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 01:21:22.429513 sshd[5818]: Connection closed by 10.200.16.10 port 48302
Jan 20 01:21:22.429908 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:22.434188 systemd[1]: sshd@15-10.200.20.20:22-10.200.16.10:48302.service: Deactivated successfully.
Jan 20 01:21:22.434300 systemd-logind[1897]: Session 18 logged out. Waiting for processes to exit.
Jan 20 01:21:22.435976 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 01:21:22.438242 systemd-logind[1897]: Removed session 18.
Jan 20 01:21:22.519009 systemd[1]: Started sshd@16-10.200.20.20:22-10.200.16.10:48308.service - OpenSSH per-connection server daemon (10.200.16.10:48308).
Jan 20 01:21:22.661532 containerd[1915]: time="2026-01-20T01:21:22.661432833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:21:22.933166 containerd[1915]: time="2026-01-20T01:21:22.933122357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 20 01:21:22.935860 containerd[1915]: time="2026-01-20T01:21:22.935827647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:21:22.936043 containerd[1915]: time="2026-01-20T01:21:22.935901609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:21:22.936184 kubelet[3470]: E0120 01:21:22.936150 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:21:22.936878 kubelet[3470]: E0120 01:21:22.936444 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:21:22.936878 kubelet[3470]: E0120 01:21:22.936542 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7465b9f86b-8bs6q_calico-apiserver(e522ab69-60c4-4bed-bd35-afe9cd973ba9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:21:22.936878 kubelet[3470]: E0120 01:21:22.936572 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9"
Jan 20 01:21:22.971416 sshd[5828]: Accepted publickey for core from 10.200.16.10 port 48308 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg
Jan 20 01:21:22.972448 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:21:22.979552 systemd-logind[1897]: New session 19 of user core.
Jan 20 01:21:22.982616 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 01:21:23.662673 containerd[1915]: time="2026-01-20T01:21:23.662590937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 20 01:21:23.830022 sshd[5831]: Connection closed by 10.200.16.10 port 48308
Jan 20 01:21:23.830925 sshd-session[5828]: pam_unix(sshd:session): session closed for user core
Jan 20 01:21:23.834658 systemd-logind[1897]: Session 19 logged out. Waiting for processes to exit.
Jan 20 01:21:23.835051 systemd[1]: sshd@16-10.200.20.20:22-10.200.16.10:48308.service: Deactivated successfully.
Jan 20 01:21:23.836955 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 01:21:23.838968 systemd-logind[1897]: Removed session 19.
Jan 20 01:21:23.908851 containerd[1915]: time="2026-01-20T01:21:23.908807834Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:21:23.912041 containerd[1915]: time="2026-01-20T01:21:23.911975353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:21:23.912041 containerd[1915]: time="2026-01-20T01:21:23.912010818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:21:23.912232 kubelet[3470]: E0120 01:21:23.912195 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:21:23.912313 kubelet[3470]: E0120 01:21:23.912237 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:21:23.912355 kubelet[3470]: E0120 01:21:23.912306 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:21:23.913592 
containerd[1915]: time="2026-01-20T01:21:23.913386912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:21:23.921463 systemd[1]: Started sshd@17-10.200.20.20:22-10.200.16.10:48324.service - OpenSSH per-connection server daemon (10.200.16.10:48324). Jan 20 01:21:24.194596 containerd[1915]: time="2026-01-20T01:21:24.194547579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:21:24.197337 containerd[1915]: time="2026-01-20T01:21:24.197295511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:21:24.197430 containerd[1915]: time="2026-01-20T01:21:24.197409218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:21:24.198054 kubelet[3470]: E0120 01:21:24.198011 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:21:24.198358 kubelet[3470]: E0120 01:21:24.198065 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:21:24.198358 kubelet[3470]: E0120 01:21:24.198127 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2cs6z_calico-system(8fa1e625-99ce-4678-80e8-ad10255fcf43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:21:24.198358 kubelet[3470]: E0120 01:21:24.198161 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:21:24.417818 sshd[5851]: Accepted publickey for core from 10.200.16.10 port 48324 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:24.418859 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:24.423235 systemd-logind[1897]: New session 20 of user core. Jan 20 01:21:24.437651 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 01:21:24.661913 containerd[1915]: time="2026-01-20T01:21:24.661794110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:21:24.916669 sshd[5854]: Connection closed by 10.200.16.10 port 48324 Jan 20 01:21:24.917108 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:24.920008 systemd[1]: sshd@17-10.200.20.20:22-10.200.16.10:48324.service: Deactivated successfully. Jan 20 01:21:24.921411 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:21:24.922469 systemd-logind[1897]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:21:24.923699 systemd-logind[1897]: Removed session 20. Jan 20 01:21:24.941201 containerd[1915]: time="2026-01-20T01:21:24.941109279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:21:24.944904 containerd[1915]: time="2026-01-20T01:21:24.944792860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:21:24.945012 containerd[1915]: time="2026-01-20T01:21:24.944949449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:21:24.945213 kubelet[3470]: E0120 01:21:24.945173 3470 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:21:24.945270 kubelet[3470]: E0120 01:21:24.945218 3470 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:21:24.945316 kubelet[3470]: E0120 01:21:24.945299 3470 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dwz6j_calico-system(394686c1-b41d-41ec-8fb4-e2ecac3e5f25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:21:24.945343 kubelet[3470]: E0120 01:21:24.945329 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:21:25.012680 systemd[1]: Started sshd@18-10.200.20.20:22-10.200.16.10:48332.service - OpenSSH per-connection server daemon (10.200.16.10:48332). Jan 20 01:21:25.503803 sshd[5866]: Accepted publickey for core from 10.200.16.10 port 48332 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:25.504899 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:25.508337 systemd-logind[1897]: New session 21 of user core. Jan 20 01:21:25.518785 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 01:21:25.661687 kubelet[3470]: E0120 01:21:25.660851 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:21:25.896135 sshd[5869]: Connection closed by 10.200.16.10 port 48332 Jan 20 01:21:25.896917 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:25.901667 systemd[1]: sshd@18-10.200.20.20:22-10.200.16.10:48332.service: Deactivated successfully. Jan 20 01:21:25.904294 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:21:25.905571 systemd-logind[1897]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:21:25.908546 systemd-logind[1897]: Removed session 21. 
Jan 20 01:21:28.660074 kubelet[3470]: E0120 01:21:28.660024 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:21:30.525400 waagent[2146]: 2026-01-20T01:21:30.525345Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 01:21:30.533127 waagent[2146]: 2026-01-20T01:21:30.533092Z INFO ExtHandler Jan 20 01:21:30.533231 waagent[2146]: 2026-01-20T01:21:30.533179Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4c9f0e16-c96b-4e87-afd7-a08656fc862b eTag: 538990307726322475 source: Fabric] Jan 20 01:21:30.533460 waagent[2146]: 2026-01-20T01:21:30.533427Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 20 01:21:30.534027 waagent[2146]: 2026-01-20T01:21:30.533993Z INFO ExtHandler Jan 20 01:21:30.534075 waagent[2146]: 2026-01-20T01:21:30.534058Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 01:21:30.590819 waagent[2146]: 2026-01-20T01:21:30.590758Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:21:30.642036 waagent[2146]: 2026-01-20T01:21:30.641977Z INFO ExtHandler Downloaded certificate {'thumbprint': '4AF552A4492492341E68C28716B2E2F11C78B645', 'hasPrivateKey': True} Jan 20 01:21:30.642410 waagent[2146]: 2026-01-20T01:21:30.642380Z INFO ExtHandler Fetch goal state completed Jan 20 01:21:30.642758 waagent[2146]: 2026-01-20T01:21:30.642694Z INFO ExtHandler ExtHandler Jan 20 01:21:30.642809 waagent[2146]: 2026-01-20T01:21:30.642792Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 9378cb3d-7492-4e93-aea2-55431a665107 correlation 8e8960be-7a5a-435c-8dba-b391a8a5a579 created: 2026-01-20T01:21:24.498970Z] Jan 20 01:21:30.643034 waagent[2146]: 2026-01-20T01:21:30.643007Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 01:21:30.643425 waagent[2146]: 2026-01-20T01:21:30.643397Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 20 01:21:30.983738 systemd[1]: Started sshd@19-10.200.20.20:22-10.200.16.10:55166.service - OpenSSH per-connection server daemon (10.200.16.10:55166). Jan 20 01:21:31.443705 sshd[5888]: Accepted publickey for core from 10.200.16.10 port 55166 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:31.445423 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:31.450943 systemd-logind[1897]: New session 22 of user core. Jan 20 01:21:31.455670 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 01:21:31.813847 sshd[5891]: Connection closed by 10.200.16.10 port 55166 Jan 20 01:21:31.814374 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:31.818345 systemd[1]: sshd@19-10.200.20.20:22-10.200.16.10:55166.service: Deactivated successfully. Jan 20 01:21:31.820832 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:21:31.821988 systemd-logind[1897]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:21:31.823657 systemd-logind[1897]: Removed session 22. Jan 20 01:21:34.661574 kubelet[3470]: E0120 01:21:34.661402 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9" Jan 20 01:21:34.662335 kubelet[3470]: E0120 01:21:34.662292 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2" Jan 20 01:21:36.661816 kubelet[3470]: E0120 01:21:36.661757 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:21:36.663468 kubelet[3470]: E0120 01:21:36.662232 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:21:36.663468 kubelet[3470]: E0120 01:21:36.663263 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:21:36.897691 systemd[1]: Started sshd@20-10.200.20.20:22-10.200.16.10:55182.service - OpenSSH per-connection server daemon (10.200.16.10:55182). Jan 20 01:21:37.353816 sshd[5903]: Accepted publickey for core from 10.200.16.10 port 55182 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:37.355404 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:37.360801 systemd-logind[1897]: New session 23 of user core. Jan 20 01:21:37.366596 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 01:21:37.664455 kubelet[3470]: E0120 01:21:37.664267 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:21:37.743254 sshd[5906]: Connection closed by 10.200.16.10 port 55182 Jan 20 01:21:37.760393 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:37.763847 systemd[1]: sshd@20-10.200.20.20:22-10.200.16.10:55182.service: Deactivated successfully. Jan 20 01:21:37.766149 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:21:37.768395 systemd-logind[1897]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:21:37.771163 systemd-logind[1897]: Removed session 23. Jan 20 01:21:42.833693 systemd[1]: Started sshd@21-10.200.20.20:22-10.200.16.10:54650.service - OpenSSH per-connection server daemon (10.200.16.10:54650). Jan 20 01:21:43.327470 sshd[5921]: Accepted publickey for core from 10.200.16.10 port 54650 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:43.328657 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:43.333145 systemd-logind[1897]: New session 24 of user core. Jan 20 01:21:43.339622 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 01:21:43.664300 kubelet[3470]: E0120 01:21:43.662369 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915" Jan 20 01:21:43.761860 sshd[5924]: Connection closed by 10.200.16.10 port 54650 Jan 20 01:21:43.763771 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:43.767493 systemd-logind[1897]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:21:43.768305 systemd[1]: sshd@21-10.200.20.20:22-10.200.16.10:54650.service: Deactivated successfully. Jan 20 01:21:43.770783 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:21:43.773030 systemd-logind[1897]: Removed session 24. 
Jan 20 01:21:46.660893 kubelet[3470]: E0120 01:21:46.660771 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7465b9f86b-8bs6q" podUID="e522ab69-60c4-4bed-bd35-afe9cd973ba9" Jan 20 01:21:47.660807 kubelet[3470]: E0120 01:21:47.660675 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-7q94c" podUID="b6739b36-c8fc-46a6-8652-d6a5a25da0c2" Jan 20 01:21:47.662729 kubelet[3470]: E0120 01:21:47.662695 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2cs6z" podUID="8fa1e625-99ce-4678-80e8-ad10255fcf43" Jan 20 01:21:48.852712 systemd[1]: Started sshd@22-10.200.20.20:22-10.200.16.10:54654.service - OpenSSH per-connection server daemon (10.200.16.10:54654). Jan 20 01:21:49.343327 sshd[5959]: Accepted publickey for core from 10.200.16.10 port 54654 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:49.344133 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:49.348729 systemd-logind[1897]: New session 25 of user core. Jan 20 01:21:49.353637 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:21:49.738545 sshd[5962]: Connection closed by 10.200.16.10 port 54654 Jan 20 01:21:49.738325 sshd-session[5959]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:49.742681 systemd[1]: sshd@22-10.200.20.20:22-10.200.16.10:54654.service: Deactivated successfully. Jan 20 01:21:49.744238 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:21:49.746614 systemd-logind[1897]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:21:49.748532 systemd-logind[1897]: Removed session 25. 
Jan 20 01:21:50.660388 kubelet[3470]: E0120 01:21:50.660319 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d584bf5d7-gtntj" podUID="07b4036f-53cd-480d-a4eb-8badfec721c3" Jan 20 01:21:51.663991 kubelet[3470]: E0120 01:21:51.663931 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f68b67c86-6bqpm" podUID="55b2502a-51c1-4f19-87b3-fdc15037a275" Jan 20 01:21:52.661966 kubelet[3470]: E0120 01:21:52.661699 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dwz6j" podUID="394686c1-b41d-41ec-8fb4-e2ecac3e5f25" Jan 20 01:21:54.830681 systemd[1]: Started sshd@23-10.200.20.20:22-10.200.16.10:53492.service - OpenSSH per-connection server daemon (10.200.16.10:53492). Jan 20 01:21:55.331783 sshd[5974]: Accepted publickey for core from 10.200.16.10 port 53492 ssh2: RSA SHA256:PcB152JMplBskgr3eDISthAmWFornwPn8szIvd0cqKg Jan 20 01:21:55.333303 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:21:55.337392 systemd-logind[1897]: New session 26 of user core. Jan 20 01:21:55.341669 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 01:21:55.727554 sshd[5977]: Connection closed by 10.200.16.10 port 53492 Jan 20 01:21:55.727443 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Jan 20 01:21:55.730976 systemd-logind[1897]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:21:55.731523 systemd[1]: sshd@23-10.200.20.20:22-10.200.16.10:53492.service: Deactivated successfully. Jan 20 01:21:55.733354 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:21:55.735982 systemd-logind[1897]: Removed session 26. 
Jan 20 01:21:56.660570 kubelet[3470]: E0120 01:21:56.660186 3470 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b6496599-bdfgs" podUID="b0ce4d6d-0160-4871-9c3a-73730559c915"