Jan 28 01:25:06.198004 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 28 01:25:06.198028 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Jan 27 23:05:14 -00 2026
Jan 28 01:25:06.198036 kernel: KASLR enabled
Jan 28 01:25:06.198042 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 28 01:25:06.198049 kernel: printk: bootconsole [pl11] enabled
Jan 28 01:25:06.198055 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:25:06.198062 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8a98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 28 01:25:06.198068 kernel: random: crng init done
Jan 28 01:25:06.198075 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:25:06.198081 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 28 01:25:06.198087 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198092 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198100 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 28 01:25:06.198106 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198113 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198119 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198126 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198134 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198140 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198146 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 28 01:25:06.198153 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 01:25:06.198159 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 28 01:25:06.198166 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 28 01:25:06.198172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 28 01:25:06.198178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 28 01:25:06.198184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 28 01:25:06.198191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 28 01:25:06.198197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 28 01:25:06.198205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 28 01:25:06.198211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 28 01:25:06.198218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 28 01:25:06.198224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 28 01:25:06.198230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 28 01:25:06.198237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 28 01:25:06.198243 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 28 01:25:06.198249 kernel: Zone ranges:
Jan 28 01:25:06.198256 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 28 01:25:06.198262 kernel: DMA32 empty
Jan 28 01:25:06.198268 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 01:25:06.198275 kernel: Movable zone start for each node
Jan 28 01:25:06.198285 kernel: Early memory node ranges
Jan 28 01:25:06.198292 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 28 01:25:06.198299 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 28 01:25:06.198305 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 28 01:25:06.198313 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 28 01:25:06.198321 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 28 01:25:06.198328 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 28 01:25:06.198335 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 01:25:06.198341 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 28 01:25:06.198348 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 28 01:25:06.198355 kernel: psci: probing for conduit method from ACPI.
Jan 28 01:25:06.198362 kernel: psci: PSCIv1.1 detected in firmware.
Jan 28 01:25:06.198368 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 28 01:25:06.198375 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 28 01:25:06.198382 kernel: psci: SMC Calling Convention v1.4
Jan 28 01:25:06.198388 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 28 01:25:06.198395 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 28 01:25:06.198403 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 28 01:25:06.198410 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 28 01:25:06.198417 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 28 01:25:06.198423 kernel: Detected PIPT I-cache on CPU0
Jan 28 01:25:06.198430 kernel: CPU features: detected: GIC system register CPU interface
Jan 28 01:25:06.198437 kernel: CPU features: detected: Hardware dirty bit management
Jan 28 01:25:06.198444 kernel: CPU features: detected: Spectre-BHB
Jan 28 01:25:06.198450 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 28 01:25:06.198457 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 28 01:25:06.198464 kernel: CPU features: detected: ARM erratum 1418040
Jan 28 01:25:06.198471 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 28 01:25:06.200504 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 28 01:25:06.200527 kernel: alternatives: applying boot alternatives
Jan 28 01:25:06.200536 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf
Jan 28 01:25:06.200545 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:25:06.200552 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:25:06.200559 kernel: Fallback order for Node 0: 0
Jan 28 01:25:06.200566 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 28 01:25:06.200572 kernel: Policy zone: Normal
Jan 28 01:25:06.200579 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:25:06.200586 kernel: software IO TLB: area num 2.
Jan 28 01:25:06.200593 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 28 01:25:06.200605 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 28 01:25:06.200612 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 28 01:25:06.200619 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:25:06.200626 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:25:06.200633 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 28 01:25:06.200640 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:25:06.200647 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:25:06.200654 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:25:06.200660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 28 01:25:06.200667 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 28 01:25:06.200674 kernel: GICv3: 960 SPIs implemented
Jan 28 01:25:06.200682 kernel: GICv3: 0 Extended SPIs implemented
Jan 28 01:25:06.200690 kernel: Root IRQ handler: gic_handle_irq
Jan 28 01:25:06.200696 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 28 01:25:06.200703 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 28 01:25:06.200710 kernel: ITS: No ITS available, not enabling LPIs
Jan 28 01:25:06.200717 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:25:06.200724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 28 01:25:06.200731 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 28 01:25:06.200738 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 28 01:25:06.200744 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 28 01:25:06.200751 kernel: Console: colour dummy device 80x25
Jan 28 01:25:06.200760 kernel: printk: console [tty1] enabled
Jan 28 01:25:06.200768 kernel: ACPI: Core revision 20230628
Jan 28 01:25:06.200775 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 28 01:25:06.200782 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:25:06.200789 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 01:25:06.200796 kernel: landlock: Up and running.
Jan 28 01:25:06.200803 kernel: SELinux: Initializing.
Jan 28 01:25:06.200810 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:25:06.200817 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:25:06.200826 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 01:25:06.200833 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 01:25:06.200840 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 28 01:25:06.200847 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 28 01:25:06.200854 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 28 01:25:06.200861 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:25:06.200868 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:25:06.200875 kernel: Remapping and enabling EFI services.
Jan 28 01:25:06.200889 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:25:06.200896 kernel: Detected PIPT I-cache on CPU1
Jan 28 01:25:06.200903 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 28 01:25:06.200911 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 28 01:25:06.200920 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 28 01:25:06.200927 kernel: smp: Brought up 1 node, 2 CPUs
Jan 28 01:25:06.200934 kernel: SMP: Total of 2 processors activated.
Jan 28 01:25:06.200942 kernel: CPU features: detected: 32-bit EL0 Support
Jan 28 01:25:06.200950 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 28 01:25:06.200959 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 28 01:25:06.200966 kernel: CPU features: detected: CRC32 instructions
Jan 28 01:25:06.200974 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 28 01:25:06.200981 kernel: CPU features: detected: LSE atomic instructions
Jan 28 01:25:06.200988 kernel: CPU features: detected: Privileged Access Never
Jan 28 01:25:06.200996 kernel: CPU: All CPU(s) started at EL1
Jan 28 01:25:06.201003 kernel: alternatives: applying system-wide alternatives
Jan 28 01:25:06.201010 kernel: devtmpfs: initialized
Jan 28 01:25:06.201018 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:25:06.201027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 28 01:25:06.201035 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:25:06.201042 kernel: SMBIOS 3.1.0 present.
Jan 28 01:25:06.201049 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 28 01:25:06.201057 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:25:06.201064 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 28 01:25:06.201071 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 28 01:25:06.201079 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 28 01:25:06.201086 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:25:06.201095 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 28 01:25:06.201103 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:25:06.201110 kernel: cpuidle: using governor menu
Jan 28 01:25:06.201118 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 28 01:25:06.201125 kernel: ASID allocator initialised with 32768 entries
Jan 28 01:25:06.201133 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:25:06.201140 kernel: Serial: AMBA PL011 UART driver
Jan 28 01:25:06.201147 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 28 01:25:06.201155 kernel: Modules: 0 pages in range for non-PLT usage
Jan 28 01:25:06.201164 kernel: Modules: 509008 pages in range for PLT usage
Jan 28 01:25:06.201171 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:25:06.201178 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:25:06.201186 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 28 01:25:06.201193 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 28 01:25:06.201201 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:25:06.201208 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:25:06.201215 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 28 01:25:06.201223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 28 01:25:06.201232 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:25:06.201241 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:25:06.201248 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:25:06.201255 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:25:06.201263 kernel: ACPI: Interpreter enabled
Jan 28 01:25:06.201270 kernel: ACPI: Using GIC for interrupt routing
Jan 28 01:25:06.201278 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 28 01:25:06.201285 kernel: printk: console [ttyAMA0] enabled
Jan 28 01:25:06.201292 kernel: printk: bootconsole [pl11] disabled
Jan 28 01:25:06.201301 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 28 01:25:06.201309 kernel: iommu: Default domain type: Translated
Jan 28 01:25:06.201316 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 28 01:25:06.201323 kernel: efivars: Registered efivars operations
Jan 28 01:25:06.201330 kernel: vgaarb: loaded
Jan 28 01:25:06.201338 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 28 01:25:06.201345 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:25:06.201353 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:25:06.201360 kernel: pnp: PnP ACPI init
Jan 28 01:25:06.201369 kernel: pnp: PnP ACPI: found 0 devices
Jan 28 01:25:06.201376 kernel: NET: Registered PF_INET protocol family
Jan 28 01:25:06.201384 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:25:06.201391 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:25:06.201399 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:25:06.201406 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:25:06.201414 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:25:06.201421 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:25:06.201429 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:25:06.201438 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:25:06.201445 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:25:06.201453 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:25:06.201460 kernel: kvm [1]: HYP mode not available
Jan 28 01:25:06.201467 kernel: Initialise system trusted keyrings
Jan 28 01:25:06.201475 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:25:06.201490 kernel: Key type asymmetric registered
Jan 28 01:25:06.201497 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:25:06.201505 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:25:06.201515 kernel: io scheduler mq-deadline registered
Jan 28 01:25:06.201522 kernel: io scheduler kyber registered
Jan 28 01:25:06.201529 kernel: io scheduler bfq registered
Jan 28 01:25:06.201537 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:25:06.201544 kernel: thunder_xcv, ver 1.0
Jan 28 01:25:06.201551 kernel: thunder_bgx, ver 1.0
Jan 28 01:25:06.201558 kernel: nicpf, ver 1.0
Jan 28 01:25:06.201566 kernel: nicvf, ver 1.0
Jan 28 01:25:06.201709 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 28 01:25:06.201788 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T01:25:05 UTC (1769563505)
Jan 28 01:25:06.201799 kernel: efifb: probing for efifb
Jan 28 01:25:06.201806 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 28 01:25:06.201814 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 28 01:25:06.201821 kernel: efifb: scrolling: redraw
Jan 28 01:25:06.201828 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 01:25:06.201836 kernel: Console: switching to colour frame buffer device 128x48
Jan 28 01:25:06.201843 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:25:06.201853 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 28 01:25:06.201860 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 28 01:25:06.201868 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 28 01:25:06.201875 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 28 01:25:06.201882 kernel: watchdog: Hard watchdog permanently disabled
Jan 28 01:25:06.201890 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:25:06.201898 kernel: Segment Routing with IPv6
Jan 28 01:25:06.201905 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:25:06.201913 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:25:06.201921 kernel: Key type dns_resolver registered
Jan 28 01:25:06.201928 kernel: registered taskstats version 1
Jan 28 01:25:06.201936 kernel: Loading compiled-in X.509 certificates
Jan 28 01:25:06.201944 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 00ce1dc8bc64b61f07099b23b76dee034878817c'
Jan 28 01:25:06.201951 kernel: Key type .fscrypt registered
Jan 28 01:25:06.201958 kernel: Key type fscrypt-provisioning registered
Jan 28 01:25:06.201966 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:25:06.201974 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:25:06.201981 kernel: ima: No architecture policies found
Jan 28 01:25:06.201990 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 28 01:25:06.201998 kernel: clk: Disabling unused clocks
Jan 28 01:25:06.202005 kernel: Freeing unused kernel memory: 39424K
Jan 28 01:25:06.202013 kernel: Run /init as init process
Jan 28 01:25:06.202020 kernel: with arguments:
Jan 28 01:25:06.202027 kernel: /init
Jan 28 01:25:06.202034 kernel: with environment:
Jan 28 01:25:06.202041 kernel: HOME=/
Jan 28 01:25:06.202048 kernel: TERM=linux
Jan 28 01:25:06.202057 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:25:06.202069 systemd[1]: Detected virtualization microsoft.
Jan 28 01:25:06.202077 systemd[1]: Detected architecture arm64.
Jan 28 01:25:06.202085 systemd[1]: Running in initrd.
Jan 28 01:25:06.202092 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:25:06.202101 systemd[1]: Hostname set to .
Jan 28 01:25:06.202109 systemd[1]: Initializing machine ID from random generator.
Jan 28 01:25:06.202118 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:25:06.202126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:25:06.202134 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:25:06.202143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:25:06.202151 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:25:06.202160 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:25:06.202168 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:25:06.202177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:25:06.202187 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:25:06.202195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:25:06.202203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:25:06.202211 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:25:06.202219 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:25:06.202227 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:25:06.202235 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:25:06.202243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:25:06.202252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:25:06.202260 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:25:06.202268 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 01:25:06.202276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:25:06.202284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:25:06.202292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:25:06.202300 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:25:06.202308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:25:06.202317 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:25:06.202325 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:25:06.202333 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:25:06.202341 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:25:06.202349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:25:06.202375 systemd-journald[217]: Collecting audit messages is disabled.
Jan 28 01:25:06.202396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:25:06.202405 systemd-journald[217]: Journal started
Jan 28 01:25:06.202424 systemd-journald[217]: Runtime Journal (/run/log/journal/5bf63aa715bb4bd8bea65cacdbda10c3) is 8.0M, max 78.5M, 70.5M free.
Jan 28 01:25:06.209947 systemd-modules-load[218]: Inserted module 'overlay'
Jan 28 01:25:06.222675 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:25:06.224716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:25:06.232866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:25:06.263654 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:25:06.263676 kernel: Bridge firewalling registered
Jan 28 01:25:06.255129 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 28 01:25:06.261861 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:25:06.267367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:25:06.276670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:06.298744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:25:06.313668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:25:06.324525 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:25:06.347644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:25:06.355509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:25:06.365210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:25:06.370586 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:25:06.384525 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:25:06.409822 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:25:06.423649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:25:06.437163 dracut-cmdline[251]: dracut-dracut-053
Jan 28 01:25:06.437163 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e7a8cac0a248eeeb18f7bcbd95b9dbb1e3415729dc1af128dd9f394f73832ecf
Jan 28 01:25:06.440979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:25:06.515830 kernel: SCSI subsystem initialized
Jan 28 01:25:06.488850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:25:06.514598 systemd-resolved[256]: Positive Trust Anchors:
Jan 28 01:25:06.538555 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:25:06.514608 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:25:06.555500 kernel: iscsi: registered transport (tcp)
Jan 28 01:25:06.514640 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:25:06.522791 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jan 28 01:25:06.524451 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:25:06.601078 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:25:06.601099 kernel: QLogic iSCSI HBA Driver
Jan 28 01:25:06.531682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:25:06.636938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:25:06.647707 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:25:06.680932 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:25:06.680993 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:25:06.686347 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 01:25:06.737513 kernel: raid6: neonx8 gen() 15820 MB/s
Jan 28 01:25:06.753493 kernel: raid6: neonx4 gen() 15695 MB/s
Jan 28 01:25:06.772495 kernel: raid6: neonx2 gen() 13212 MB/s
Jan 28 01:25:06.792487 kernel: raid6: neonx1 gen() 10504 MB/s
Jan 28 01:25:06.811485 kernel: raid6: int64x8 gen() 6987 MB/s
Jan 28 01:25:06.830485 kernel: raid6: int64x4 gen() 7375 MB/s
Jan 28 01:25:06.850485 kernel: raid6: int64x2 gen() 6146 MB/s
Jan 28 01:25:06.872474 kernel: raid6: int64x1 gen() 5071 MB/s
Jan 28 01:25:06.872493 kernel: raid6: using algorithm neonx8 gen() 15820 MB/s
Jan 28 01:25:06.895370 kernel: raid6: .... xor() 12045 MB/s, rmw enabled
Jan 28 01:25:06.895380 kernel: raid6: using neon recovery algorithm
Jan 28 01:25:06.905405 kernel: xor: measuring software checksum speed
Jan 28 01:25:06.905421 kernel: 8regs : 19807 MB/sec
Jan 28 01:25:06.909485 kernel: 32regs : 19213 MB/sec
Jan 28 01:25:06.915506 kernel: arm64_neon : 26180 MB/sec
Jan 28 01:25:06.915527 kernel: xor: using function: arm64_neon (26180 MB/sec)
Jan 28 01:25:06.964490 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:25:06.974343 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:25:06.989600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:25:07.009463 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 28 01:25:07.013687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:25:07.035720 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:25:07.053376 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Jan 28 01:25:07.080040 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:25:07.092701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:25:07.129144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:25:07.143806 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:25:07.172692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:25:07.185449 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:25:07.201501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:25:07.209831 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:25:07.233710 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:25:07.254188 kernel: hv_vmbus: Vmbus version:5.3
Jan 28 01:25:07.250939 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:25:07.265885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:25:07.266036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:25:07.318836 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 28 01:25:07.318858 kernel: hv_vmbus: registering driver hid_hyperv
Jan 28 01:25:07.318868 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 28 01:25:07.318877 kernel: PTP clock support registered
Jan 28 01:25:07.318887 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 28 01:25:07.318896 kernel: hv_utils: Registering HyperV Utility Driver
Jan 28 01:25:07.279471 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:25:07.094011 kernel: hv_vmbus: registering driver hv_utils
Jan 28 01:25:07.101359 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 28 01:25:07.101377 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 28 01:25:07.101390 kernel: hv_utils: Heartbeat IC version 3.0
Jan 28 01:25:07.101399 kernel: hv_utils: Shutdown IC version 3.2
Jan 28 01:25:07.101407 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 28 01:25:07.101533 kernel: hv_utils: TimeSync IC version 4.0
Jan 28 01:25:07.101543 kernel: hv_vmbus: registering driver hv_netvsc
Jan 28 01:25:07.101550 systemd-journald[217]: Time jumped backwards, rotating.
Jan 28 01:25:07.101588 kernel: hv_vmbus: registering driver hv_storvsc
Jan 28 01:25:07.306728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:25:07.306947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:07.355048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:25:07.086604 systemd-resolved[256]: Clock change detected. Flushing caches.
Jan 28 01:25:07.135485 kernel: scsi host0: storvsc_host_t
Jan 28 01:25:07.135647 kernel: scsi host1: storvsc_host_t
Jan 28 01:25:07.135736 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 28 01:25:07.113564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:25:07.133557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:25:07.133657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:07.162512 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 28 01:25:07.157678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:25:07.179426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:07.194354 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:25:07.213231 kernel: hv_netvsc 000d3ac4-4b3e-000d-3ac4-4b3e000d3ac4 eth0: VF slot 1 added
Jan 28 01:25:07.222505 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 28 01:25:07.222701 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:25:07.229323 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 28 01:25:07.245063 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 28 01:25:07.245277 kernel: hv_vmbus: registering driver hv_pci
Jan 28 01:25:07.245289 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 28 01:25:07.249908 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 28 01:25:07.250064 kernel: hv_pci 13a01273-9b76-468b-9b7e-23e0cc6c3c0a: PCI VMBus probing: Using version 0x10004
Jan 28 01:25:07.250389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:25:07.278810 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 28 01:25:07.279026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:25:07.279140 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 28 01:25:07.290438 kernel: hv_pci 13a01273-9b76-468b-9b7e-23e0cc6c3c0a: PCI host bridge to bus 9b76:00
Jan 28 01:25:07.290619 kernel: pci_bus 9b76:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 28 01:25:07.295634 kernel: pci_bus 9b76:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 28 01:25:07.295762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:25:07.303172 kernel: pci 9b76:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 28 01:25:07.315582 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 28 01:25:07.322243 kernel: pci 9b76:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 01:25:07.336162 kernel: pci 9b76:00:02.0: enabling Extended Tags
Jan 28 01:25:07.336240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#267 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 01:25:07.354262 kernel: pci 9b76:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9b76:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 28 01:25:07.363822 kernel: pci_bus 9b76:00: busn_res: [bus 00-ff] end is updated to 00
Jan 28 01:25:07.364090 kernel: pci 9b76:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 01:25:07.402823 kernel: mlx5_core 9b76:00:02.0: enabling device (0000 -> 0002)
Jan 28 01:25:07.409166 kernel: mlx5_core 9b76:00:02.0: firmware version: 16.30.5026
Jan 28 01:25:07.603160 kernel: hv_netvsc 000d3ac4-4b3e-000d-3ac4-4b3e000d3ac4 eth0: VF registering: eth1
Jan 28 01:25:07.603348 kernel: mlx5_core 9b76:00:02.0 eth1: joined to eth0
Jan 28 01:25:07.613259 kernel: mlx5_core 9b76:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 28 01:25:07.622163 kernel: mlx5_core 9b76:00:02.0 enP39798s1: renamed from eth1
Jan 28 01:25:07.850174 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (482)
Jan 28 01:25:07.863380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 28 01:25:07.910472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 28 01:25:07.937026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 28 01:25:07.977171 kernel: BTRFS: device fsid 0fc26676-8036-4cd5-8c30-2943afb25b0b devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (484)
Jan 28 01:25:07.989938 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 28 01:25:07.995697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 28 01:25:08.026312 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:25:08.050272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:25:08.058195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:25:08.068161 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:25:09.069157 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 01:25:09.070282 disk-uuid[604]: The operation has completed successfully.
Jan 28 01:25:09.137301 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:25:09.139163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:25:09.169263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:25:09.179668 sh[717]: Success
Jan 28 01:25:09.217232 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 28 01:25:09.522993 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:25:09.527918 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:25:09.540281 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:25:09.567916 kernel: BTRFS info (device dm-0): first mount of filesystem 0fc26676-8036-4cd5-8c30-2943afb25b0b
Jan 28 01:25:09.567962 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:25:09.573646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 01:25:09.577852 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:25:09.581406 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 01:25:09.882769 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:25:09.887799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:25:09.909429 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:25:09.916311 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:25:09.961444 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:25:09.961501 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:25:09.965325 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:25:10.002285 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:25:10.011184 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 01:25:10.022790 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:25:10.026334 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:25:10.038901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:25:10.064426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:25:10.073306 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:25:10.103352 systemd-networkd[901]: lo: Link UP
Jan 28 01:25:10.103361 systemd-networkd[901]: lo: Gained carrier
Jan 28 01:25:10.104864 systemd-networkd[901]: Enumeration completed
Jan 28 01:25:10.104959 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:25:10.111746 systemd[1]: Reached target network.target - Network.
Jan 28 01:25:10.115178 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:25:10.115181 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:25:10.195164 kernel: mlx5_core 9b76:00:02.0 enP39798s1: Link up
Jan 28 01:25:10.233727 kernel: hv_netvsc 000d3ac4-4b3e-000d-3ac4-4b3e000d3ac4 eth0: Data path switched to VF: enP39798s1
Jan 28 01:25:10.233327 systemd-networkd[901]: enP39798s1: Link UP
Jan 28 01:25:10.233407 systemd-networkd[901]: eth0: Link UP
Jan 28 01:25:10.233535 systemd-networkd[901]: eth0: Gained carrier
Jan 28 01:25:10.233543 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:25:10.245338 systemd-networkd[901]: enP39798s1: Gained carrier
Jan 28 01:25:10.271177 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 01:25:10.939900 ignition[900]: Ignition 2.19.0
Jan 28 01:25:10.939909 ignition[900]: Stage: fetch-offline
Jan 28 01:25:10.944380 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:25:10.939943 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:10.939951 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:10.940048 ignition[900]: parsed url from cmdline: ""
Jan 28 01:25:10.940051 ignition[900]: no config URL provided
Jan 28 01:25:10.940055 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:25:10.966424 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 28 01:25:10.940062 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:25:10.940066 ignition[900]: failed to fetch config: resource requires networking
Jan 28 01:25:10.940468 ignition[900]: Ignition finished successfully
Jan 28 01:25:10.985984 ignition[909]: Ignition 2.19.0
Jan 28 01:25:10.985994 ignition[909]: Stage: fetch
Jan 28 01:25:10.986194 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:10.986203 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:10.986306 ignition[909]: parsed url from cmdline: ""
Jan 28 01:25:10.986310 ignition[909]: no config URL provided
Jan 28 01:25:10.986314 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:25:10.986321 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:25:10.986340 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 28 01:25:11.102553 ignition[909]: GET result: OK
Jan 28 01:25:11.102654 ignition[909]: config has been read from IMDS userdata
Jan 28 01:25:11.102739 ignition[909]: parsing config with SHA512: 4cd6d8674837b4340e31d8ec13d7434e8390db0d3e889936ee539027593ef1d0809e4c3149f70e26b690fb91006ded64e1be4d99746c8ffde7a1cfddccb77f46
Jan 28 01:25:11.109627 unknown[909]: fetched base config from "system"
Jan 28 01:25:11.110026 ignition[909]: fetch: fetch complete
Jan 28 01:25:11.109633 unknown[909]: fetched base config from "system"
Jan 28 01:25:11.110031 ignition[909]: fetch: fetch passed
Jan 28 01:25:11.109638 unknown[909]: fetched user config from "azure"
Jan 28 01:25:11.110072 ignition[909]: Ignition finished successfully
Jan 28 01:25:11.114064 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 28 01:25:11.138296 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:25:11.153837 ignition[916]: Ignition 2.19.0
Jan 28 01:25:11.153846 ignition[916]: Stage: kargs
Jan 28 01:25:11.157017 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:11.161194 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:25:11.157036 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:11.158051 ignition[916]: kargs: kargs passed
Jan 28 01:25:11.158097 ignition[916]: Ignition finished successfully
Jan 28 01:25:11.185649 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:25:11.200574 ignition[922]: Ignition 2.19.0
Jan 28 01:25:11.200584 ignition[922]: Stage: disks
Jan 28 01:25:11.204905 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:25:11.200754 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:11.212679 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:25:11.200769 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:11.222464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:25:11.201764 ignition[922]: disks: disks passed
Jan 28 01:25:11.232166 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:25:11.201810 ignition[922]: Ignition finished successfully
Jan 28 01:25:11.241994 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:25:11.251906 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:25:11.274408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:25:11.338883 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 28 01:25:11.346134 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:25:11.363877 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:25:11.418172 kernel: EXT4-fs (sda9): mounted filesystem 2c7419f5-3bc3-4c5f-b132-f03585db88cd r/w with ordered data mode. Quota mode: none.
Jan 28 01:25:11.418408 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:25:11.422828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:25:11.463225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:25:11.493413 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Jan 28 01:25:11.493459 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:25:11.498358 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:25:11.502119 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:25:11.509167 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:25:11.509360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:25:11.515346 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 28 01:25:11.523206 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:25:11.523242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:25:11.529526 systemd-networkd[901]: eth0: Gained IPv6LL
Jan 28 01:25:11.547273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:25:11.553836 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:25:11.573295 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:25:12.142233 coreos-metadata[958]: Jan 28 01:25:12.142 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 01:25:12.149444 coreos-metadata[958]: Jan 28 01:25:12.149 INFO Fetch successful
Jan 28 01:25:12.149444 coreos-metadata[958]: Jan 28 01:25:12.149 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 28 01:25:12.164008 coreos-metadata[958]: Jan 28 01:25:12.160 INFO Fetch successful
Jan 28 01:25:12.203238 coreos-metadata[958]: Jan 28 01:25:12.203 INFO wrote hostname ci-4081.3.6-n-11aaf12d54 to /sysroot/etc/hostname
Jan 28 01:25:12.211466 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 01:25:12.359343 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:25:12.394271 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:25:12.416589 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:25:12.439512 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:25:13.467339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:25:13.478512 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:25:13.486305 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:25:13.504367 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:25:13.504718 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:25:13.528548 ignition[1059]: INFO : Ignition 2.19.0
Jan 28 01:25:13.528548 ignition[1059]: INFO : Stage: mount
Jan 28 01:25:13.535401 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:13.535401 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:13.535401 ignition[1059]: INFO : mount: mount passed
Jan 28 01:25:13.535401 ignition[1059]: INFO : Ignition finished successfully
Jan 28 01:25:13.539861 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 01:25:13.561246 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 01:25:13.572834 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 01:25:13.590718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:25:13.613175 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Jan 28 01:25:13.625161 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334
Jan 28 01:25:13.625206 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 01:25:13.629525 kernel: BTRFS info (device sda6): using free space tree
Jan 28 01:25:13.637177 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 28 01:25:13.638487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:25:13.664618 ignition[1090]: INFO : Ignition 2.19.0
Jan 28 01:25:13.664618 ignition[1090]: INFO : Stage: files
Jan 28 01:25:13.671522 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:13.671522 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:13.671522 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 01:25:13.671522 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 01:25:13.671522 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 01:25:13.743262 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 01:25:13.749579 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 01:25:13.749579 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 01:25:13.743631 unknown[1090]: wrote ssh authorized keys file for user: core
Jan 28 01:25:13.773573 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 28 01:25:13.782269 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 28 01:25:13.782269 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 28 01:25:13.782269 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 28 01:25:13.825468 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 28 01:25:13.972418 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 28 01:25:14.553069 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 28 01:25:15.096086 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 01:25:15.096086 ignition[1090]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 28 01:25:15.116694 ignition[1090]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 01:25:15.128435 ignition[1090]: INFO : files: files passed
Jan 28 01:25:15.128435 ignition[1090]: INFO : Ignition finished successfully
Jan 28 01:25:15.128502 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 01:25:15.165406 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 01:25:15.183302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 01:25:15.280692 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:25:15.197505 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 01:25:15.293492 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:25:15.293492 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 01:25:15.197600 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:25:15.232862 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:25:15.240330 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 01:25:15.256379 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 01:25:15.295723 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 01:25:15.295836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 01:25:15.308027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 01:25:15.321548 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 01:25:15.332616 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 01:25:15.350414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 01:25:15.375554 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:25:15.394461 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 01:25:15.415132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:25:15.424276 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:25:15.434520 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 01:25:15.444029 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 01:25:15.444100 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 01:25:15.457274 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 01:25:15.467230 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 01:25:15.475908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 01:25:15.485362 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:25:15.495240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 01:25:15.505254 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 01:25:15.514604 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:25:15.524425 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 01:25:15.535534 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 01:25:15.544759 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 01:25:15.553825 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 01:25:15.553888 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:25:15.567469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:25:15.572950 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:25:15.583616 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 01:25:15.588318 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
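The grep "No such file or directory" messages above are benign: the root-filesystem-completion step probes two optional sysext lists that this machine does not ship. Conceptually the probe looks like this (the loop itself is an assumption; only the two paths come from the log):

    for list in /sysroot/etc/flatcar/enabled-sysext.conf \
                /sysroot/usr/share/flatcar/enabled-sysext.conf; do
      grep -v '^#' "$list" || true   # a missing file only yields the logged warning
    done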
Jan 28 01:25:15.594794 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 01:25:15.594855 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:25:15.610697 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 01:25:15.610739 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 01:25:15.617314 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 01:25:15.617352 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 01:25:15.629624 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 28 01:25:15.629678 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 01:25:15.660343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 01:25:15.704245 ignition[1141]: INFO : Ignition 2.19.0
Jan 28 01:25:15.704245 ignition[1141]: INFO : Stage: umount
Jan 28 01:25:15.704245 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 01:25:15.704245 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 01:25:15.704245 ignition[1141]: INFO : umount: umount passed
Jan 28 01:25:15.704245 ignition[1141]: INFO : Ignition finished successfully
Jan 28 01:25:15.673327 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 01:25:15.673412 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:25:15.704286 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 01:25:15.708517 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 01:25:15.708580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:25:15.714446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 01:25:15.714494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:25:15.728286 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 28 01:25:15.728371 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 28 01:25:15.744194 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 28 01:25:15.744722 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 01:25:15.747183 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 28 01:25:15.756738 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 28 01:25:15.756870 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 28 01:25:15.768365 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 28 01:25:15.768427 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 28 01:25:15.777161 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 28 01:25:15.777207 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 28 01:25:15.786429 systemd[1]: Stopped target network.target - Network.
Jan 28 01:25:15.795638 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 28 01:25:15.795743 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:25:15.811164 systemd[1]: Stopped target paths.target - Path Units.
Jan 28 01:25:15.819130 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 28 01:25:15.828212 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:25:15.838454 systemd[1]: Stopped target slices.target - Slice Units.
Jan 28 01:25:15.848111 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 28 01:25:15.852460 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 28 01:25:15.852510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:25:15.861358 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 28 01:25:15.861398 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:25:15.869546 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 28 01:25:15.869591 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 28 01:25:15.878034 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 28 01:25:15.878070 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 28 01:25:15.886737 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 28 01:25:15.895034 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 28 01:25:15.905381 systemd-networkd[901]: eth0: DHCPv6 lease lost
Jan 28 01:25:15.906758 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 28 01:25:15.906859 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 28 01:25:15.919836 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 28 01:25:16.102279 kernel: hv_netvsc 000d3ac4-4b3e-000d-3ac4-4b3e000d3ac4 eth0: Data path switched from VF: enP39798s1
Jan 28 01:25:15.919950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 28 01:25:15.931582 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 28 01:25:15.931627 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:25:15.954335 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 28 01:25:15.964962 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 28 01:25:15.965037 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:25:15.974154 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 28 01:25:15.974196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:25:15.982454 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 28 01:25:15.982490 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:25:15.990672 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 28 01:25:15.990709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:25:16.001245 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:25:16.036545 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 28 01:25:16.036704 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:25:16.047958 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 28 01:25:16.048042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:25:16.059904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 28 01:25:16.059938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:25:16.064754 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 28 01:25:16.064801 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
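The hv_netvsc line above records Azure accelerated networking being torn down: traffic is moved off the Mellanox SR-IOV VF (enP39798s1) back onto the synthetic eth0 before the initrd releases the device. On a running VM the same pairing can be inspected with standard tools, for example:

    ip -br link              # lists the synthetic eth0 (hv_netvsc) and the VF enP39798s1
    ethtool -i enP39798s1    # reports driver: mlx5_core, the SR-IOV VF backing eth0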
Jan 28 01:25:16.077447 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 28 01:25:16.077494 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:25:16.086350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:25:16.086392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:25:16.116592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 28 01:25:16.126823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 28 01:25:16.126901 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:25:16.133118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:25:16.133174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:25:16.143685 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 28 01:25:16.145173 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 28 01:25:16.153687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 28 01:25:16.153774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 28 01:25:16.168050 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 28 01:25:16.168891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 28 01:25:16.209560 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 28 01:25:16.209716 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 28 01:25:16.218617 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 28 01:25:16.247404 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 28 01:25:16.461913 systemd[1]: Switching root.
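"Switching root." is the initrd handing control to the real root filesystem assembled under /sysroot. It corresponds to systemd's switch-root operation, conceptually:

    # Conceptual equivalent of initrd-switch-root.service (sketch)
    systemctl --no-block switch-root /sysroot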
Jan 28 01:25:16.492054 systemd-journald[217]: Journal stopped
Jan 28 01:25:11.573295 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 01:25:12.142233 coreos-metadata[958]: Jan 28 01:25:12.142 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 01:25:12.149444 coreos-metadata[958]: Jan 28 01:25:12.149 INFO Fetch successful Jan 28 01:25:12.149444 coreos-metadata[958]: Jan 28 01:25:12.149 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 28 01:25:12.164008 coreos-metadata[958]: Jan 28 01:25:12.160 INFO Fetch successful Jan 28 01:25:12.203238 coreos-metadata[958]: Jan 28 01:25:12.203 INFO wrote hostname ci-4081.3.6-n-11aaf12d54 to /sysroot/etc/hostname Jan 28 01:25:12.211466 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:25:12.359343 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 01:25:12.394271 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jan 28 01:25:12.416589 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 01:25:12.439512 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 01:25:13.467339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:25:13.478512 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:25:13.486305 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:25:13.504367 kernel: BTRFS info (device sda6): last unmount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:25:13.504718 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:25:13.528548 ignition[1059]: INFO : Ignition 2.19.0 Jan 28 01:25:13.528548 ignition[1059]: INFO : Stage: mount Jan 28 01:25:13.535401 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:25:13.535401 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:25:13.535401 ignition[1059]: INFO : mount: mount passed Jan 28 01:25:13.535401 ignition[1059]: INFO : Ignition finished successfully Jan 28 01:25:13.539861 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:25:13.561246 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:25:13.572834 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:25:13.590718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:25:13.613175 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Jan 28 01:25:13.625161 kernel: BTRFS info (device sda6): first mount of filesystem 11ff68ea-4313-40eb-9d5c-ba27cd060334 Jan 28 01:25:13.625206 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 01:25:13.629525 kernel: BTRFS info (device sda6): using free space tree Jan 28 01:25:13.637177 kernel: BTRFS info (device sda6): auto enabling async discard Jan 28 01:25:13.638487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 01:25:13.664618 ignition[1090]: INFO : Ignition 2.19.0 Jan 28 01:25:13.664618 ignition[1090]: INFO : Stage: files Jan 28 01:25:13.671522 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:25:13.671522 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:25:13.671522 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:25:13.671522 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:25:13.671522 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:25:13.743262 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:25:13.749579 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:25:13.749579 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:25:13.743631 unknown[1090]: wrote ssh authorized keys file for user: core Jan 28 01:25:13.773573 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 01:25:13.782269 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 01:25:13.782269 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 01:25:13.782269 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 28 01:25:13.825468 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 01:25:13.972418 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 
01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:25:13.981346 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 28 01:25:14.553069 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 01:25:15.096086 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 01:25:15.096086 ignition[1090]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 28 01:25:15.116694 ignition[1090]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:25:15.128435 ignition[1090]: INFO : files: files passed Jan 28 01:25:15.128435 ignition[1090]: INFO : Ignition finished successfully Jan 28 01:25:15.128502 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:25:15.165406 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:25:15.183302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:25:15.280692 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:25:15.197505 systemd[1]: ignition-quench.service: Deactivated successfully. 
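
Every numbered op in the files stage above is driven by a declarative Ignition config: users under `passwd`, files and links under `storage`, units and drop-ins under `systemd` (the Ignition 2.19.0 binary in the log consumes spec-3.x configs). Below is a minimal Ignition-v3-style config of the same shape, assembled as a Python dict; the ssh key and unit bodies are placeholders, since the real contents never appear in the log.

```python
import json

# Placeholder values throughout; only the structure mirrors the logged ops.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"]}
        ]
    },
    "storage": {
        "links": [
            {   # mirrors the op(a) link written in the files stage above
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
            }
        ]
    },
    "systemd": {
        "units": [
            {   # op(c)/op(d): drop-in for containerd.service
                "name": "containerd.service",
                "dropins": [{"name": "10-use-cgroupfs.conf",
                             "contents": "[Service]\n# example drop-in body\n"}],
            },
            {   # op(e)/op(10): write and preset-enable prepare-helm.service
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            },
        ]
    },
}

print(json.dumps(config, indent=2))
```

The op(a) link and the prepare-helm preset in the log map one-to-one onto the `storage.links` and `systemd.units` entries here.
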
Jan 28 01:25:15.293492 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:25:15.293492 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:25:15.197600 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:25:15.232862 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:25:15.240330 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:25:15.256379 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:25:15.295723 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:25:15.295836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:25:15.308027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:25:15.321548 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:25:15.332616 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:25:15.350414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:25:15.375554 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:25:15.394461 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:25:15.415132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:25:15.424276 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:25:15.434520 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:25:15.444029 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:25:15.444100 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:25:15.457274 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:25:15.467230 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:25:15.475908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:25:15.485362 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:25:15.495240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:25:15.505254 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:25:15.514604 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:25:15.524425 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:25:15.535534 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:25:15.544759 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:25:15.553825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:25:15.553888 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:25:15.567469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:25:15.572950 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:25:15.583616 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:25:15.588318 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 28 01:25:15.594794 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:25:15.594855 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:25:15.610697 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:25:15.610739 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:25:15.617314 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:25:15.617352 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:25:15.629624 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 01:25:15.629678 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:25:15.660343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:25:15.704245 ignition[1141]: INFO : Ignition 2.19.0 Jan 28 01:25:15.704245 ignition[1141]: INFO : Stage: umount Jan 28 01:25:15.704245 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:25:15.704245 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:25:15.704245 ignition[1141]: INFO : umount: umount passed Jan 28 01:25:15.704245 ignition[1141]: INFO : Ignition finished successfully Jan 28 01:25:15.673327 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:25:15.673412 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:25:15.704286 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:25:15.708517 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:25:15.708580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:25:15.714446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:25:15.714494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:25:15.728286 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:25:15.728371 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:25:15.744194 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:25:15.744722 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:25:15.747183 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:25:15.756738 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:25:15.756870 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:25:15.768365 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:25:15.768427 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:25:15.777161 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 01:25:15.777207 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 01:25:15.786429 systemd[1]: Stopped target network.target - Network. Jan 28 01:25:15.795638 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:25:15.795743 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:25:15.811164 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:25:15.819130 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:25:15.828212 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 28 01:25:15.838454 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:25:15.848111 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:25:15.852460 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:25:15.852510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:25:15.861358 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:25:15.861398 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:25:15.869546 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:25:15.869591 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:25:15.878034 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:25:15.878070 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:25:15.886737 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:25:15.895034 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:25:15.905381 systemd-networkd[901]: eth0: DHCPv6 lease lost Jan 28 01:25:15.906758 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:25:15.906859 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:25:15.919836 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:25:16.102279 kernel: hv_netvsc 000d3ac4-4b3e-000d-3ac4-4b3e000d3ac4 eth0: Data path switched from VF: enP39798s1 Jan 28 01:25:15.919950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:25:15.931582 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:25:15.931627 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:25:15.954335 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:25:15.964962 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:25:15.965037 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:25:15.974154 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:25:15.974196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:25:15.982454 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:25:15.982490 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:25:15.990672 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:25:15.990709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:25:16.001245 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:25:16.036545 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:25:16.036704 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:25:16.047958 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:25:16.048042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:25:16.059904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:25:16.059938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:25:16.064754 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:25:16.064801 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 28 01:25:16.077447 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:25:16.077494 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:25:16.086350 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:25:16.086392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:25:16.116592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:25:16.126823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:25:16.126901 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:25:16.133118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:25:16.133174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:25:16.143685 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:25:16.145173 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:25:16.153687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:25:16.153774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:25:16.168050 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:25:16.168891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:25:16.209560 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:25:16.209716 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:25:16.218617 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:25:16.247404 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:25:16.461913 systemd[1]: Switching root. Jan 28 01:25:16.492054 systemd-journald[217]: Journal stopped Jan 28 01:25:21.447115 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 28 01:25:21.447139 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:25:21.447161 kernel: SELinux: policy capability open_perms=1 Jan 28 01:25:21.447173 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:25:21.447180 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:25:21.447188 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:25:21.447199 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:25:21.447207 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:25:21.447215 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:25:21.447223 kernel: audit: type=1403 audit(1769563518.154:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 01:25:21.447234 systemd[1]: Successfully loaded SELinux policy in 180.006ms. Jan 28 01:25:21.447243 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.751ms. Jan 28 01:25:21.447254 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:25:21.447263 systemd[1]: Detected virtualization microsoft. Jan 28 01:25:21.447272 systemd[1]: Detected architecture arm64. Jan 28 01:25:21.447282 systemd[1]: Detected first boot. Jan 28 01:25:21.447292 systemd[1]: Hostname set to <ci-4081.3.6-n-11aaf12d54>.
Jan 28 01:25:21.447301 systemd[1]: Initializing machine ID from random generator. Jan 28 01:25:21.447310 zram_generator::config[1201]: No configuration found. Jan 28 01:25:21.447319 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:25:21.447328 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:25:21.447339 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 01:25:21.447348 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:25:21.447358 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:25:21.447367 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:25:21.447376 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:25:21.447386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:25:21.447396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:25:21.447407 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:25:21.447416 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:25:21.447425 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:25:21.447435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:25:21.447444 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:25:21.447454 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:25:21.447463 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:25:21.447472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:25:21.447481 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 01:25:21.447492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:25:21.447502 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:25:21.447511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:25:21.447522 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:25:21.447532 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:25:21.447542 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:25:21.447551 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:25:21.447562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:25:21.447572 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:25:21.447581 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 01:25:21.447591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:25:21.447601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:25:21.447611 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:25:21.447620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:25:21.447631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
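
"Detected first boot" in the journal above is what routes this boot through machine-ID initialization from the random generator. Per machine-id(5), a randomly generated ID is a v4 UUID rendered as 32 lowercase hex characters without dashes; a one-line sketch of the equivalent generation:

```python
import uuid

# A freshly generated machine ID in systemd's on-disk format:
# 32 lowercase hex characters, no dashes (see machine-id(5)).
print(uuid.uuid4().hex)
```
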
Jan 28 01:25:21.447641 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:25:21.447651 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:25:21.447660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:25:21.447670 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:25:21.447679 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:25:21.447691 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:25:21.447701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:25:21.447711 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:25:21.447720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:25:21.447730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:25:21.447740 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:25:21.447749 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:25:21.447759 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:25:21.447768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:25:21.447779 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:25:21.447789 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 28 01:25:21.447799 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 28 01:25:21.447809 kernel: fuse: init (API version 7.39) Jan 28 01:25:21.447818 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:25:21.447827 kernel: ACPI: bus type drm_connector registered Jan 28 01:25:21.447836 kernel: loop: module loaded Jan 28 01:25:21.447845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:25:21.447856 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:25:21.447866 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:25:21.447892 systemd-journald[1303]: Collecting audit messages is disabled. Jan 28 01:25:21.447915 systemd-journald[1303]: Journal started Jan 28 01:25:21.447937 systemd-journald[1303]: Runtime Journal (/run/log/journal/4d80245584ae41a1b320cb84fd5b476b) is 8.0M, max 78.5M, 70.5M free. Jan 28 01:25:21.462550 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:25:21.473442 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:25:21.474618 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:25:21.479256 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:25:21.484577 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:25:21.488811 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:25:21.493642 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:25:21.499414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 28 01:25:21.503821 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:25:21.509084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:25:21.514901 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:25:21.515047 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:25:21.520455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:25:21.520590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:25:21.525688 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:25:21.525824 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:25:21.530896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:25:21.531033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:25:21.536629 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:25:21.536764 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:25:21.541715 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:25:21.544316 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:25:21.550440 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:25:21.555872 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:25:21.561610 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:25:21.567104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:25:21.582441 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:25:21.592231 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:25:21.598532 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:25:21.603517 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:25:21.606277 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:25:21.612274 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:25:21.620883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:25:21.621937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:25:21.626849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:25:21.628055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:25:21.634368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:25:21.649309 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 01:25:21.659567 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:25:21.666269 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:25:21.674813 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 28 01:25:21.683107 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:25:21.684698 systemd-journald[1303]: Time spent on flushing to /var/log/journal/4d80245584ae41a1b320cb84fd5b476b is 37.562ms for 886 entries. Jan 28 01:25:21.684698 systemd-journald[1303]: System Journal (/var/log/journal/4d80245584ae41a1b320cb84fd5b476b) is 11.8M, max 2.6G, 2.6G free. Jan 28 01:25:21.789349 systemd-journald[1303]: Received client request to flush runtime journal. Jan 28 01:25:21.789407 systemd-journald[1303]: /var/log/journal/4d80245584ae41a1b320cb84fd5b476b/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 28 01:25:21.789435 systemd-journald[1303]: Rotating system journal. Jan 28 01:25:21.694776 udevadm[1362]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 28 01:25:21.791091 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:25:21.804602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:25:21.813655 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 28 01:25:21.813672 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 28 01:25:21.821183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:25:21.832415 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:25:21.948588 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:25:21.960315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:25:21.977545 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 28 01:25:21.977560 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 28 01:25:21.982677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:25:22.348364 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:25:22.367291 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:25:22.385854 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Jan 28 01:25:22.513072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:25:22.534282 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:25:22.563630 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 28 01:25:22.586420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:25:22.650797 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 28 01:25:22.659187 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:25:22.672161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#311 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:25:22.722197 kernel: hv_vmbus: registering driver hv_balloon Jan 28 01:25:22.729929 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 01:25:22.730017 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 28 01:25:22.782983 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 01:25:22.783195 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 01:25:22.790641 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 01:25:22.790742 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1399) Jan 28 01:25:22.799414 kernel: Console: switching to colour dummy device 80x25 Jan 28 01:25:22.808596 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 01:25:22.812764 systemd-networkd[1398]: lo: Link UP Jan 28 01:25:22.813216 systemd-networkd[1398]: lo: Gained carrier Jan 28 01:25:22.815028 systemd-networkd[1398]: Enumeration completed Jan 28 01:25:22.816744 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:25:22.816815 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:25:22.816818 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:25:22.834478 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:25:22.872179 kernel: mlx5_core 9b76:00:02.0 enP39798s1: Link up Jan 28 01:25:22.884497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:25:22.895252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 01:25:22.898156 kernel: hv_netvsc 000d3ac4-4b3e-000d-3ac4-4b3e000d3ac4 eth0: Data path switched to VF: enP39798s1 Jan 28 01:25:22.898610 systemd-networkd[1398]: enP39798s1: Link UP Jan 28 01:25:22.898830 systemd-networkd[1398]: eth0: Link UP Jan 28 01:25:22.898895 systemd-networkd[1398]: eth0: Gained carrier Jan 28 01:25:22.898957 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:25:22.906808 systemd-networkd[1398]: enP39798s1: Gained carrier Jan 28 01:25:22.913264 systemd-networkd[1398]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 01:25:22.930004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:25:22.930300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:25:22.941351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:25:22.969087 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 01:25:22.984319 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 01:25:23.077205 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:25:23.100739 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 01:25:23.107731 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 28 01:25:23.118274 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 01:25:23.128137 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:25:23.157795 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 01:25:23.164629 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:25:23.171038 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:25:23.171065 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:25:23.175824 systemd[1]: Reached target machines.target - Containers. Jan 28 01:25:23.181461 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 01:25:23.192281 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:25:23.198764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:25:23.204496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:25:23.206121 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:25:23.213544 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 01:25:23.221420 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:25:23.237082 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:25:23.272314 kernel: loop0: detected capacity change from 0 to 114328 Jan 28 01:25:23.273598 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:25:23.275601 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 01:25:23.285385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:25:23.356998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:25:23.694169 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:25:23.733163 kernel: loop1: detected capacity change from 0 to 31320 Jan 28 01:25:24.068264 systemd-networkd[1398]: eth0: Gained IPv6LL Jan 28 01:25:24.073522 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:25:24.148165 kernel: loop2: detected capacity change from 0 to 114432 Jan 28 01:25:24.521166 kernel: loop3: detected capacity change from 0 to 207008 Jan 28 01:25:24.562227 kernel: loop4: detected capacity change from 0 to 114328 Jan 28 01:25:24.574167 kernel: loop5: detected capacity change from 0 to 31320 Jan 28 01:25:24.586165 kernel: loop6: detected capacity change from 0 to 114432 Jan 28 01:25:24.598166 kernel: loop7: detected capacity change from 0 to 207008 Jan 28 01:25:24.610496 (sd-merge)[1511]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 28 01:25:24.610913 (sd-merge)[1511]: Merged extensions into '/usr'. Jan 28 01:25:24.615383 systemd[1]: Reloading requested from client PID 1490 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:25:24.615400 systemd[1]: Reloading... 
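
The (sd-merge) lines above show systemd-sysext discovering four extension images (the loop4..loop7 capacity changes logged just before are their backing loop devices) and merging them into /usr. The sketch below covers only the discovery half, assuming the documented /etc/extensions search directory, which the earlier files stage populated with the kubernetes.raw symlink; the real merge also mounts the images and builds an overlay, which is omitted here.

```python
from pathlib import Path

def list_extension_images(root: str = "/") -> list[Path]:
    # systemd-sysext scans a few fixed directories for *.raw images;
    # /etc/extensions is the one Ignition populated on this machine.
    ext_dir = Path(root, "etc/extensions")
    if not ext_dir.is_dir():
        return []
    return sorted(p for p in ext_dir.iterdir() if p.suffix == ".raw")

for image in list_extension_images():
    # resolve() follows the symlink, e.g. kubernetes.raw ->
    # /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
    print(image, "->", image.resolve())
```
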
Jan 28 01:25:24.673205 zram_generator::config[1541]: No configuration found. Jan 28 01:25:24.801647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:25:24.871849 systemd[1]: Reloading finished in 255 ms. Jan 28 01:25:24.886885 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:25:24.903284 systemd[1]: Starting ensure-sysext.service... Jan 28 01:25:24.908675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:25:24.916299 systemd[1]: Reloading requested from client PID 1599 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:25:24.916405 systemd[1]: Reloading... Jan 28 01:25:24.947604 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:25:24.947868 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:25:24.948542 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:25:24.948768 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Jan 28 01:25:24.948811 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Jan 28 01:25:24.969644 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:25:24.969655 systemd-tmpfiles[1600]: Skipping /boot Jan 28 01:25:24.977447 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:25:24.977461 systemd-tmpfiles[1600]: Skipping /boot Jan 28 01:25:24.978166 zram_generator::config[1625]: No configuration found. Jan 28 01:25:25.095489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:25:25.169394 systemd[1]: Reloading finished in 252 ms. Jan 28 01:25:25.185054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:25:25.201354 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:25:25.208301 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:25:25.226287 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:25:25.233478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:25:25.242284 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:25:25.257888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:25:25.271644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:25:25.286372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:25:25.301355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:25:25.310027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:25:25.311254 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 28 01:25:25.320445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:25:25.321352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:25:25.326798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:25:25.326953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:25:25.332641 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:25:25.332824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:25:25.350929 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:25:25.358466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:25:25.365407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:25:25.372601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:25:25.385329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:25:25.395244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:25:25.401067 systemd-resolved[1698]: Positive Trust Anchors: Jan 28 01:25:25.401077 systemd-resolved[1698]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:25:25.401109 systemd-resolved[1698]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:25:25.406590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:25:25.406657 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:25:25.412516 systemd[1]: Finished ensure-sysext.service. Jan 28 01:25:25.416481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:25:25.416635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:25:25.421902 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:25:25.422046 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:25:25.426819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:25:25.426958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:25:25.433658 augenrules[1737]: No rules Jan 28 01:25:25.433714 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:25:25.433849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:25:25.439451 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:25:25.449995 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:25:25.450075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
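
The positive trust anchor systemd-resolved logs above is the DNS root zone's DS record for the KSK-2017 key (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256). A small sketch that splits the exact string from the log into those fields:

```python
# DS record string copied from the systemd-resolved[1698] line above.
ANCHOR = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

ALGORITHMS = {8: "RSASHA256"}     # RFC 5702
DIGEST_TYPES = {2: "SHA-256"}     # RFC 4509

owner, _cls, _rtype, key_tag, alg, digest_type, digest = ANCHOR.split()
print(f"owner={owner} key_tag={key_tag}",
      f"algorithm={ALGORITHMS[int(alg)]}",
      f"digest_type={DIGEST_TYPES[int(digest_type)]}")
print(f"digest={digest}")
```
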
Jan 28 01:25:25.461770 systemd-resolved[1698]: Using system hostname 'ci-4081.3.6-n-11aaf12d54'. Jan 28 01:25:25.463451 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:25:25.468387 systemd[1]: Reached target network.target - Network. Jan 28 01:25:25.472151 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:25:25.477072 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:25:25.836815 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:25:25.843616 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:25:28.630170 ldconfig[1487]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:25:28.639564 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:25:28.650268 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:25:28.662895 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:25:28.668683 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:25:28.673866 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:25:28.679509 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:25:28.685244 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:25:28.690232 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:25:28.695842 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:25:28.701587 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:25:28.701620 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:25:28.705673 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:25:28.710537 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:25:28.716649 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:25:28.722284 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:25:28.728214 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:25:28.732901 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:25:28.737521 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:25:28.741756 systemd[1]: System is tainted: cgroupsv1 Jan 28 01:25:28.741796 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:25:28.741819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:25:28.743920 systemd[1]: Starting chronyd.service - NTP client/server... Jan 28 01:25:28.750290 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:25:28.767305 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 01:25:28.777312 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 28 01:25:28.785422 (chronyd)[1759]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 28 01:25:28.785990 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:25:28.791823 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:25:28.796512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:25:28.796632 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 28 01:25:28.798330 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 28 01:25:28.806520 KVP[1768]: KVP starting; pid is:1768 Jan 28 01:25:28.808572 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 28 01:25:28.811169 jq[1766]: false Jan 28 01:25:28.811895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:28.815263 chronyd[1773]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 28 01:25:28.819401 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:25:28.827169 kernel: hv_utils: KVP IC version 4.0 Jan 28 01:25:28.827329 KVP[1768]: KVP LIC Version: 3.1 Jan 28 01:25:28.833727 chronyd[1773]: Timezone right/UTC failed leap second check, ignoring Jan 28 01:25:28.833890 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:25:28.833910 chronyd[1773]: Loaded seccomp filter (level 2) Jan 28 01:25:28.843225 extend-filesystems[1767]: Found loop4 Jan 28 01:25:28.843225 extend-filesystems[1767]: Found loop5 Jan 28 01:25:28.843225 extend-filesystems[1767]: Found loop6 Jan 28 01:25:28.843225 extend-filesystems[1767]: Found loop7 Jan 28 01:25:28.843225 extend-filesystems[1767]: Found sda Jan 28 01:25:28.843225 extend-filesystems[1767]: Found sda1 Jan 28 01:25:28.843225 extend-filesystems[1767]: Found sda2 Jan 28 01:25:28.847245 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:25:28.910399 dbus-daemon[1763]: [system] SELinux support is enabled Jan 28 01:25:28.919887 extend-filesystems[1767]: Found sda3 Jan 28 01:25:28.919887 extend-filesystems[1767]: Found usr Jan 28 01:25:28.919887 extend-filesystems[1767]: Found sda4 Jan 28 01:25:28.919887 extend-filesystems[1767]: Found sda6 Jan 28 01:25:28.919887 extend-filesystems[1767]: Found sda7 Jan 28 01:25:28.919887 extend-filesystems[1767]: Found sda9 Jan 28 01:25:28.919887 extend-filesystems[1767]: Checking size of /dev/sda9 Jan 28 01:25:28.857283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
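The extend-filesystems "Found ..." lines above (continuing below with sda9 and sr0) are a walk over the kernel's block-device inventory. A hedged sketch of an equivalent walk over /proc/partitions; the actual service is a shell script, so this is an illustration only:

    with open("/proc/partitions") as f:
        rows = f.readlines()[2:]  # skip the 'major minor #blocks name' header and blank line
    for row in rows:
        if row.strip():
            print("Found", row.split()[-1])  # loop0..loop7, sda, sda1, ...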
Jan 28 01:25:29.070845 extend-filesystems[1767]: Old size kept for /dev/sda9 Jan 28 01:25:29.070845 extend-filesystems[1767]: Found sr0 Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.025 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.027 INFO Fetch successful Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.027 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.032 INFO Fetch successful Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.032 INFO Fetching http://168.63.129.16/machine/944b3217-ca9c-4b67-a233-f1d622e38f6d/06eec06f%2D90b3%2D4c49%2Da2d8%2Dec0c4f11f508.%5Fci%2D4081.3.6%2Dn%2D11aaf12d54?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.040 INFO Fetch successful Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.040 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 28 01:25:29.081831 coreos-metadata[1761]: Jan 28 01:25:29.063 INFO Fetch successful Jan 28 01:25:28.877501 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:25:28.890525 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:25:28.909901 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:25:28.915874 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:25:28.944463 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:25:29.083791 update_engine[1795]: I20260128 01:25:29.032214 1795 main.cc:92] Flatcar Update Engine starting Jan 28 01:25:29.083791 update_engine[1795]: I20260128 01:25:29.045358 1795 update_check_scheduler.cc:74] Next update check in 4m26s Jan 28 01:25:28.973829 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:25:29.084072 jq[1803]: true Jan 28 01:25:28.984770 systemd[1]: Started chronyd.service - NTP client/server. Jan 28 01:25:29.000491 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:25:29.000731 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:25:29.000960 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:25:29.002264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:25:29.014464 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:25:29.014699 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:25:29.027839 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:25:29.047499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:25:29.047726 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:25:29.087771 systemd-logind[1791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 28 01:25:29.091953 systemd-logind[1791]: New seat seat0. Jan 28 01:25:29.097398 systemd[1]: Started systemd-logind.service - User Login Management. 
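The coreos-metadata fetches above use the two link-local Azure endpoints: the WireServer at 168.63.129.16 and the Instance Metadata Service (IMDS) at 169.254.169.254. A minimal sketch of the vmSize query exactly as logged; it only answers from inside an Azure VM, and IMDS rejects any request without the Metadata header:

    import urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},  # mandatory; requests without it are refused
    )
    print(urllib.request.urlopen(req, timeout=5).read().decode())  # prints the VM size string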
Jan 28 01:25:29.104503 jq[1825]: true Jan 28 01:25:29.110512 (ntainerd)[1830]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:25:29.243279 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1820) Jan 28 01:25:29.241069 dbus-daemon[1763]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 01:25:29.250163 tar[1818]: linux-arm64/LICENSE Jan 28 01:25:29.250163 tar[1818]: linux-arm64/helm Jan 28 01:25:29.253946 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:25:29.279130 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 01:25:29.293371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:25:29.295607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:25:29.295782 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:25:29.306240 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:25:29.306350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:25:29.315108 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:25:29.322428 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:25:29.401241 bash[1894]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:25:29.407742 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:25:29.419002 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:25:29.637155 locksmithd[1895]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:25:29.819511 tar[1818]: linux-arm64/README.md Jan 28 01:25:29.832659 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:25:29.872365 containerd[1830]: time="2026-01-28T01:25:29.870531080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:25:29.903169 containerd[1830]: time="2026-01-28T01:25:29.903059320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.904712 containerd[1830]: time="2026-01-28T01:25:29.904672440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:29.904813 containerd[1830]: time="2026-01-28T01:25:29.904801280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:25:29.904888 containerd[1830]: time="2026-01-28T01:25:29.904875360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 28 01:25:29.905080 containerd[1830]: time="2026-01-28T01:25:29.905064240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:25:29.905160 containerd[1830]: time="2026-01-28T01:25:29.905137800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905283 containerd[1830]: time="2026-01-28T01:25:29.905266640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905339 containerd[1830]: time="2026-01-28T01:25:29.905327560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905625 containerd[1830]: time="2026-01-28T01:25:29.905606080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905683 containerd[1830]: time="2026-01-28T01:25:29.905672040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905734 containerd[1830]: time="2026-01-28T01:25:29.905720760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905788 containerd[1830]: time="2026-01-28T01:25:29.905775400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.905928 containerd[1830]: time="2026-01-28T01:25:29.905913320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.906250 containerd[1830]: time="2026-01-28T01:25:29.906233720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:25:29.906465 containerd[1830]: time="2026-01-28T01:25:29.906447640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:25:29.906521 containerd[1830]: time="2026-01-28T01:25:29.906509400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:25:29.906663 containerd[1830]: time="2026-01-28T01:25:29.906647680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 01:25:29.906759 containerd[1830]: time="2026-01-28T01:25:29.906747480Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.926334240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.926403440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.926419680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.926434960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.926507640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.926691480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.927040840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.927137360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.927177360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:25:29.927174 containerd[1830]: time="2026-01-28T01:25:29.927190760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927206720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927237760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927251760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927266840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927281280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927293680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927305080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927317000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927337000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927352240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927365360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927378560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927397440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927463 containerd[1830]: time="2026-01-28T01:25:29.927423160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927436560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927449520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927462280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927477200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927489920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927505240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927517120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927532880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927553200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927565320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927576360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927622280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927638800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:25:29.927700 containerd[1830]: time="2026-01-28T01:25:29.927649600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:25:29.927978 containerd[1830]: time="2026-01-28T01:25:29.927660800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:25:29.927978 containerd[1830]: time="2026-01-28T01:25:29.927670760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.927978 containerd[1830]: time="2026-01-28T01:25:29.927682400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 28 01:25:29.927978 containerd[1830]: time="2026-01-28T01:25:29.927695160Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:25:29.927978 containerd[1830]: time="2026-01-28T01:25:29.927706280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 28 01:25:29.928067 containerd[1830]: time="2026-01-28T01:25:29.927974840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:25:29.928067 containerd[1830]: time="2026-01-28T01:25:29.928036040Z" level=info msg="Connect containerd service" Jan 28 01:25:29.928216 containerd[1830]: time="2026-01-28T01:25:29.928092880Z" level=info msg="using legacy CRI server" Jan 28 01:25:29.928216 containerd[1830]: time="2026-01-28T01:25:29.928101120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:25:29.929234 containerd[1830]: time="2026-01-28T01:25:29.929212120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:25:29.931179 
containerd[1830]: time="2026-01-28T01:25:29.930353760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:25:29.932283 containerd[1830]: time="2026-01-28T01:25:29.932254440Z" level=info msg="Start subscribing containerd event" Jan 28 01:25:29.932395 containerd[1830]: time="2026-01-28T01:25:29.932382720Z" level=info msg="Start recovering state" Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933157680Z" level=info msg="Start event monitor" Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933175560Z" level=info msg="Start snapshots syncer" Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933186840Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933194920Z" level=info msg="Start streaming server" Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933396720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933435000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:25:29.935157 containerd[1830]: time="2026-01-28T01:25:29.933482960Z" level=info msg="containerd successfully booted in 0.065542s" Jan 28 01:25:29.933597 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:25:30.114629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:30.120217 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:25:30.437321 sshd_keygen[1800]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:25:30.465114 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:25:30.476421 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:25:30.484851 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 28 01:25:30.489855 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:25:30.490065 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:25:30.500476 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:25:30.520139 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:25:30.537357 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 28 01:25:30.547514 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:25:30.562565 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 28 01:25:30.567817 kubelet[1923]: E0128 01:25:30.567783 1923 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:25:30.569689 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:25:30.575025 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:25:30.580668 systemd[1]: Startup finished in 13.138s (kernel) + 12.604s (userspace) = 25.743s. 
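The "Start cri plugin with config {...}" dump above is containerd's in-memory CRI configuration; on disk the same settings live in TOML (conventionally /etc/containerd/config.toml). The knob most relevant to the kubelet attempts later in this log is SystemdCgroup, shown false in the dump. A sketch using Python 3.11's stdlib tomllib to read that flag from an equivalent TOML fragment; the fragment is illustrative, not copied from this host:

    import tomllib

    FRAGMENT = """
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
    """
    cfg = tomllib.loads(FRAGMENT)
    runc = cfg["plugins"]["io.containerd.grpc.v1.cri"]["containerd"]["runtimes"]["runc"]
    print(runc["options"]["SystemdCgroup"])  # False, matching the dump above

The adjacent cni error ("no network config found in /etc/cni/net.d") is expected at this stage: no CNI conflist has been installed yet, so the CRI plugin defers pod networking until one appears.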
Jan 28 01:25:30.581671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:25:30.581892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:25:30.939858 login[1956]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:30.941305 login[1957]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:30.947991 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:25:30.957373 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:25:30.959271 systemd-logind[1791]: New session 2 of user core. Jan 28 01:25:30.962351 systemd-logind[1791]: New session 1 of user core. Jan 28 01:25:30.984483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:25:30.991455 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:25:30.995004 (systemd)[1968]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:25:31.122652 systemd[1968]: Queued start job for default target default.target. Jan 28 01:25:31.123612 systemd[1968]: Created slice app.slice - User Application Slice. Jan 28 01:25:31.123725 systemd[1968]: Reached target paths.target - Paths. Jan 28 01:25:31.123801 systemd[1968]: Reached target timers.target - Timers. Jan 28 01:25:31.133258 systemd[1968]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:25:31.140864 systemd[1968]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:25:31.141046 systemd[1968]: Reached target sockets.target - Sockets. Jan 28 01:25:31.141136 systemd[1968]: Reached target basic.target - Basic System. Jan 28 01:25:31.141250 systemd[1968]: Reached target default.target - Main User Target. Jan 28 01:25:31.141343 systemd[1968]: Startup finished in 141ms. Jan 28 01:25:31.141355 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:25:31.146458 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:25:31.147127 systemd[1]: Started session-2.scope - Session 2 of User core. 
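The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") means only that the node has not been joined to a cluster yet; kubeadm writes that file during init/join. For orientation, a minimal hand-written KubeletConfiguration of the shape the loader expects, written to /tmp rather than /var/lib/kubelet; every value here is an illustrative assumption, not recovered from this host:

    import pathlib, textwrap

    EXAMPLE = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: cgroupfs  # would need to agree with containerd's SystemdCgroup=false
        """)
    pathlib.Path("/tmp/kubelet-config-example.yaml").write_text(EXAMPLE)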
Jan 28 01:25:33.050629 waagent[1953]: 2026-01-28T01:25:33.050538Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 28 01:25:33.055342 waagent[1953]: 2026-01-28T01:25:33.055283Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 28 01:25:33.059542 waagent[1953]: 2026-01-28T01:25:33.059496Z INFO Daemon Daemon Python: 3.11.9 Jan 28 01:25:33.063529 waagent[1953]: 2026-01-28T01:25:33.063481Z INFO Daemon Daemon Run daemon Jan 28 01:25:33.067140 waagent[1953]: 2026-01-28T01:25:33.067101Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 28 01:25:33.074829 waagent[1953]: 2026-01-28T01:25:33.074786Z INFO Daemon Daemon Using waagent for provisioning Jan 28 01:25:33.079383 waagent[1953]: 2026-01-28T01:25:33.079345Z INFO Daemon Daemon Activate resource disk Jan 28 01:25:33.083312 waagent[1953]: 2026-01-28T01:25:33.083273Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 01:25:33.095528 waagent[1953]: 2026-01-28T01:25:33.095361Z INFO Daemon Daemon Found device: None Jan 28 01:25:33.101162 waagent[1953]: 2026-01-28T01:25:33.099956Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 01:25:33.107255 waagent[1953]: 2026-01-28T01:25:33.107193Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 01:25:33.117974 waagent[1953]: 2026-01-28T01:25:33.117928Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:25:33.122589 waagent[1953]: 2026-01-28T01:25:33.122551Z INFO Daemon Daemon Running default provisioning handler Jan 28 01:25:33.132669 waagent[1953]: 2026-01-28T01:25:33.132602Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 01:25:33.144196 waagent[1953]: 2026-01-28T01:25:33.144122Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 01:25:33.152080 waagent[1953]: 2026-01-28T01:25:33.152037Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 01:25:33.156248 waagent[1953]: 2026-01-28T01:25:33.156210Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 01:25:33.262229 waagent[1953]: 2026-01-28T01:25:33.262116Z INFO Daemon Daemon Successfully mounted dvd Jan 28 01:25:33.277365 waagent[1953]: 2026-01-28T01:25:33.277294Z INFO Daemon Daemon Detect protocol endpoint Jan 28 01:25:33.277845 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 01:25:33.281584 waagent[1953]: 2026-01-28T01:25:33.281526Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:25:33.286234 waagent[1953]: 2026-01-28T01:25:33.286192Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 28 01:25:33.291435 waagent[1953]: 2026-01-28T01:25:33.291395Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 01:25:33.296242 waagent[1953]: 2026-01-28T01:25:33.296201Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 01:25:33.300608 waagent[1953]: 2026-01-28T01:25:33.300569Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 01:25:33.345183 waagent[1953]: 2026-01-28T01:25:33.345132Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 01:25:33.350793 waagent[1953]: 2026-01-28T01:25:33.350769Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 01:25:33.354974 waagent[1953]: 2026-01-28T01:25:33.354939Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 01:25:33.498187 waagent[1953]: 2026-01-28T01:25:33.497683Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 01:25:33.503604 waagent[1953]: 2026-01-28T01:25:33.503536Z INFO Daemon Daemon Forcing an update of the goal state. Jan 28 01:25:33.511234 waagent[1953]: 2026-01-28T01:25:33.511185Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:25:33.531033 waagent[1953]: 2026-01-28T01:25:33.530992Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 01:25:33.536245 waagent[1953]: 2026-01-28T01:25:33.536205Z INFO Daemon Jan 28 01:25:33.538557 waagent[1953]: 2026-01-28T01:25:33.538520Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e1df821b-22be-486d-9d58-eefd7262dc7d eTag: 16465208202113842582 source: Fabric] Jan 28 01:25:33.548410 waagent[1953]: 2026-01-28T01:25:33.548367Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 28 01:25:33.553975 waagent[1953]: 2026-01-28T01:25:33.553936Z INFO Daemon Jan 28 01:25:33.556276 waagent[1953]: 2026-01-28T01:25:33.556241Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:25:33.565275 waagent[1953]: 2026-01-28T01:25:33.565242Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 01:25:33.639556 waagent[1953]: 2026-01-28T01:25:33.639421Z INFO Daemon Downloaded certificate {'thumbprint': '4F14C87610DA6A20ADE21C756820107636D3983C', 'hasPrivateKey': True} Jan 28 01:25:33.648267 waagent[1953]: 2026-01-28T01:25:33.648220Z INFO Daemon Fetch goal state completed Jan 28 01:25:33.658367 waagent[1953]: 2026-01-28T01:25:33.658328Z INFO Daemon Daemon Starting provisioning Jan 28 01:25:33.662532 waagent[1953]: 2026-01-28T01:25:33.662485Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 01:25:33.666301 waagent[1953]: 2026-01-28T01:25:33.666266Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-11aaf12d54] Jan 28 01:25:33.675159 waagent[1953]: 2026-01-28T01:25:33.673075Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-11aaf12d54] Jan 28 01:25:33.678588 waagent[1953]: 2026-01-28T01:25:33.678524Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 01:25:33.683940 waagent[1953]: 2026-01-28T01:25:33.683883Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 01:25:33.713807 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:25:33.713814 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
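The protocol detection above settles on wire protocol version 2012-11-30 against the WireServer at 168.63.129.16. A hedged sketch of the goal-state request the agent issues next; the x-ms-version header carrying the negotiated version is an assumption from the wire protocol's documented shape, and the call only succeeds from inside the VM:

    import urllib.request

    req = urllib.request.Request(
        "http://168.63.129.16/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},  # must match the negotiated version
    )
    print(urllib.request.urlopen(req, timeout=5).read().decode()[:300])  # XML goal state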
Jan 28 01:25:33.713836 systemd-networkd[1398]: eth0: DHCP lease lost Jan 28 01:25:33.715269 waagent[1953]: 2026-01-28T01:25:33.715176Z INFO Daemon Daemon Create user account if not exists Jan 28 01:25:33.719814 waagent[1953]: 2026-01-28T01:25:33.719757Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 01:25:33.725228 waagent[1953]: 2026-01-28T01:25:33.725142Z INFO Daemon Daemon Configure sudoer Jan 28 01:25:33.726273 systemd-networkd[1398]: eth0: DHCPv6 lease lost Jan 28 01:25:33.729478 waagent[1953]: 2026-01-28T01:25:33.729416Z INFO Daemon Daemon Configure sshd Jan 28 01:25:33.733260 waagent[1953]: 2026-01-28T01:25:33.733196Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 01:25:33.743799 waagent[1953]: 2026-01-28T01:25:33.743666Z INFO Daemon Daemon Deploy ssh public key. Jan 28 01:25:33.755240 systemd-networkd[1398]: eth0: DHCPv4 address 10.200.20.23/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 01:25:34.840254 waagent[1953]: 2026-01-28T01:25:34.840207Z INFO Daemon Daemon Provisioning complete Jan 28 01:25:34.856014 waagent[1953]: 2026-01-28T01:25:34.855969Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 01:25:34.860807 waagent[1953]: 2026-01-28T01:25:34.860763Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 28 01:25:34.869252 waagent[1953]: 2026-01-28T01:25:34.869204Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 28 01:25:34.996438 waagent[2023]: 2026-01-28T01:25:34.996127Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 28 01:25:34.996438 waagent[2023]: 2026-01-28T01:25:34.996292Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 28 01:25:34.996438 waagent[2023]: 2026-01-28T01:25:34.996355Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 28 01:25:35.031178 waagent[2023]: 2026-01-28T01:25:35.030711Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 28 01:25:35.031178 waagent[2023]: 2026-01-28T01:25:35.030939Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:25:35.031178 waagent[2023]: 2026-01-28T01:25:35.031001Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:25:35.038792 waagent[2023]: 2026-01-28T01:25:35.038735Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:25:35.043857 waagent[2023]: 2026-01-28T01:25:35.043821Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 01:25:35.044316 waagent[2023]: 2026-01-28T01:25:35.044278Z INFO ExtHandler Jan 28 01:25:35.044386 waagent[2023]: 2026-01-28T01:25:35.044359Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 033d7da6-c260-47f7-8bb7-51c6f1f2ca24 eTag: 16465208202113842582 source: Fabric] Jan 28 01:25:35.044666 waagent[2023]: 2026-01-28T01:25:35.044627Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
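The certificate "thumbprint" values in the goal-state fetches above (and repeated by the new agent below) are the SHA-1 digest of the DER-encoded certificate, rendered as upper-case hex. A minimal sketch:

    import hashlib, ssl

    def thumbprint(pem_cert: str) -> str:
        der = ssl.PEM_cert_to_DER_cert(pem_cert)  # strip the PEM armor, base64-decode
        return hashlib.sha1(der).hexdigest().upper()  # e.g. 4F14C876... as logged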
Jan 28 01:25:35.045241 waagent[2023]: 2026-01-28T01:25:35.045196Z INFO ExtHandler Jan 28 01:25:35.045313 waagent[2023]: 2026-01-28T01:25:35.045283Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:25:35.048310 waagent[2023]: 2026-01-28T01:25:35.048278Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 01:25:35.125096 waagent[2023]: 2026-01-28T01:25:35.124960Z INFO ExtHandler Downloaded certificate {'thumbprint': '4F14C87610DA6A20ADE21C756820107636D3983C', 'hasPrivateKey': True} Jan 28 01:25:35.125575 waagent[2023]: 2026-01-28T01:25:35.125531Z INFO ExtHandler Fetch goal state completed Jan 28 01:25:35.142927 waagent[2023]: 2026-01-28T01:25:35.142875Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2023 Jan 28 01:25:35.143077 waagent[2023]: 2026-01-28T01:25:35.143045Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 01:25:35.144670 waagent[2023]: 2026-01-28T01:25:35.144626Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 01:25:35.145023 waagent[2023]: 2026-01-28T01:25:35.144988Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 28 01:25:35.178958 waagent[2023]: 2026-01-28T01:25:35.178916Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 01:25:35.179142 waagent[2023]: 2026-01-28T01:25:35.179106Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 01:25:35.184862 waagent[2023]: 2026-01-28T01:25:35.184812Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 01:25:35.190694 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit waagent.service)... Jan 28 01:25:35.190710 systemd[1]: Reloading... Jan 28 01:25:35.278176 zram_generator::config[2079]: No configuration found. Jan 28 01:25:35.368525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:25:35.445945 systemd[1]: Reloading finished in 254 ms. Jan 28 01:25:35.464749 waagent[2023]: 2026-01-28T01:25:35.464365Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 28 01:25:35.470446 systemd[1]: Reloading requested from client PID 2129 ('systemctl') (unit waagent.service)... Jan 28 01:25:35.470463 systemd[1]: Reloading... Jan 28 01:25:35.547173 zram_generator::config[2167]: No configuration found. Jan 28 01:25:35.650401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:25:35.727321 systemd[1]: Reloading finished in 256 ms. Jan 28 01:25:35.747429 waagent[2023]: 2026-01-28T01:25:35.746313Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 01:25:35.747429 waagent[2023]: 2026-01-28T01:25:35.746466Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 01:25:36.132975 waagent[2023]: 2026-01-28T01:25:36.132881Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 28 01:25:36.133558 waagent[2023]: 2026-01-28T01:25:36.133508Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 28 01:25:36.134327 waagent[2023]: 2026-01-28T01:25:36.134251Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 01:25:36.134706 waagent[2023]: 2026-01-28T01:25:36.134615Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 28 01:25:36.135118 waagent[2023]: 2026-01-28T01:25:36.135014Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 01:25:36.135234 waagent[2023]: 2026-01-28T01:25:36.135120Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 01:25:36.135392 waagent[2023]: 2026-01-28T01:25:36.135329Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:25:36.135749 waagent[2023]: 2026-01-28T01:25:36.135667Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 01:25:36.135924 waagent[2023]: 2026-01-28T01:25:36.135863Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 28 01:25:36.136267 waagent[2023]: 2026-01-28T01:25:36.136139Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:25:36.136571 waagent[2023]: 2026-01-28T01:25:36.136524Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:25:36.137338 waagent[2023]: 2026-01-28T01:25:36.136791Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:25:36.137338 waagent[2023]: 2026-01-28T01:25:36.137007Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
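The MonitorHandler dump that follows prints /proc/net/route verbatim, where Destination, Gateway, and Mask are little-endian 32-bit hex words. A minimal sketch for decoding them:

    import socket, struct

    def hex_to_ip(word: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

    print(hex_to_ip("0114C80A"))  # 10.200.20.1, the default gateway in the table below
    print(hex_to_ip("0014C80A"))  # 10.200.20.0, the on-link subnet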
Jan 28 01:25:36.137338 waagent[2023]: 2026-01-28T01:25:36.137202Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 28 01:25:36.137338 waagent[2023]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 28 01:25:36.137338 waagent[2023]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 28 01:25:36.137338 waagent[2023]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 28 01:25:36.137338 waagent[2023]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:25:36.137338 waagent[2023]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:25:36.137338 waagent[2023]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:25:36.137635 waagent[2023]: 2026-01-28T01:25:36.137590Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 01:25:36.138863 waagent[2023]: 2026-01-28T01:25:36.137799Z INFO EnvHandler ExtHandler Configure routes Jan 28 01:25:36.140421 waagent[2023]: 2026-01-28T01:25:36.140364Z INFO EnvHandler ExtHandler Gateway:None Jan 28 01:25:36.140595 waagent[2023]: 2026-01-28T01:25:36.140557Z INFO EnvHandler ExtHandler Routes:None Jan 28 01:25:36.211964 waagent[2023]: 2026-01-28T01:25:36.211893Z INFO MonitorHandler ExtHandler Network interfaces: Jan 28 01:25:36.211964 waagent[2023]: Executing ['ip', '-a', '-o', 'link']: Jan 28 01:25:36.211964 waagent[2023]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 28 01:25:36.211964 waagent[2023]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:4b:3e brd ff:ff:ff:ff:ff:ff Jan 28 01:25:36.211964 waagent[2023]: 3: enP39798s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:4b:3e brd ff:ff:ff:ff:ff:ff\ altname enP39798p0s2 Jan 28 01:25:36.211964 waagent[2023]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 28 01:25:36.211964 waagent[2023]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 28 01:25:36.211964 waagent[2023]: 2: eth0 inet 10.200.20.23/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 28 01:25:36.211964 waagent[2023]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 28 01:25:36.211964 waagent[2023]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 28 01:25:36.211964 waagent[2023]: 2: eth0 inet6 fe80::20d:3aff:fec4:4b3e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 28 01:25:36.250610 waagent[2023]: 2026-01-28T01:25:36.250541Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules: Jan 28 01:25:36.250610 waagent[2023]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:25:36.250610 waagent[2023]: pkts bytes target prot opt in out source destination Jan 28 01:25:36.250610 waagent[2023]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:25:36.250610 waagent[2023]: pkts bytes target prot opt in out source destination Jan 28 01:25:36.250610 waagent[2023]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:25:36.250610 waagent[2023]: pkts bytes target prot opt in out source destination Jan 28 01:25:36.250610 waagent[2023]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 01:25:36.250610 waagent[2023]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 01:25:36.250610 waagent[2023]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 01:25:36.253616 waagent[2023]: 2026-01-28T01:25:36.253560Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 28 01:25:36.253616 waagent[2023]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:25:36.253616 waagent[2023]: pkts bytes target prot opt in out source destination Jan 28 01:25:36.253616 waagent[2023]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:25:36.253616 waagent[2023]: pkts bytes target prot opt in out source destination Jan 28 01:25:36.253616 waagent[2023]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:25:36.253616 waagent[2023]: pkts bytes target prot opt in out source destination Jan 28 01:25:36.253616 waagent[2023]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 01:25:36.253616 waagent[2023]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 01:25:36.253616 waagent[2023]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 01:25:36.253850 waagent[2023]: 2026-01-28T01:25:36.253816Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 28 01:25:36.480006 waagent[2023]: 2026-01-28T01:25:36.479893Z INFO ExtHandler ExtHandler Jan 28 01:25:36.480094 waagent[2023]: 2026-01-28T01:25:36.480019Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 81712459-4df8-4a43-9335-7e491968bc7b correlation 3b68e56c-7123-4ad3-b8a0-de69d9b60766 created: 2026-01-28T01:24:33.833244Z] Jan 28 01:25:36.480462 waagent[2023]: 2026-01-28T01:25:36.480421Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 28 01:25:36.481014 waagent[2023]: 2026-01-28T01:25:36.480979Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 28 01:25:36.510061 waagent[2023]: 2026-01-28T01:25:36.510003Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5A61D943-28B3-417C-87B7-893DF7D6F8AB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 28 01:25:40.585520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:25:40.596343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:40.710318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
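The three OUTPUT-chain entries above implement waagent's WireServer guard: allow TCP/53 to 168.63.129.16, allow traffic owned by root (UID 0), and drop any other new connection to that address. A hedged reconstruction of the rules as iptables invocations, inferred from the counters dump rather than taken from agent source:

    WIRESERVER = "168.63.129.16"
    rules = [
        ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        print(" ".join(rule))  # printed, not executed; pass to subprocess.run to apply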
Jan 28 01:25:40.720555 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:25:40.824169 kubelet[2266]: E0128 01:25:40.824096 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:25:40.829338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:25:40.829504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:25:50.314576 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:25:50.324369 systemd[1]: Started sshd@0-10.200.20.23:22-10.200.16.10:57054.service - OpenSSH per-connection server daemon (10.200.16.10:57054). Jan 28 01:25:50.835462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:25:50.846317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:25:50.894593 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 57054 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:50.896540 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:50.901070 systemd-logind[1791]: New session 3 of user core. Jan 28 01:25:50.908419 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:25:50.941418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:25:50.945832 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:25:51.083541 kubelet[2290]: E0128 01:25:51.083493 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:25:51.086869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:25:51.087034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:25:51.317349 systemd[1]: Started sshd@1-10.200.20.23:22-10.200.16.10:57070.service - OpenSSH per-connection server daemon (10.200.16.10:57070). Jan 28 01:25:51.759378 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 57070 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:51.760702 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:51.765112 systemd-logind[1791]: New session 4 of user core. Jan 28 01:25:51.772457 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:25:52.091219 sshd[2298]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:52.093721 systemd-logind[1791]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:25:52.094515 systemd[1]: sshd@1-10.200.20.23:22-10.200.16.10:57070.service: Deactivated successfully. Jan 28 01:25:52.097601 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:25:52.098869 systemd-logind[1791]: Removed session 4. 
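The "RSA SHA256:mt/Wq3..." fields above are OpenSSH key fingerprints: a SHA-256 digest of the raw public-key blob, base64-encoded with padding stripped. A minimal sketch reproducing one from an authorized_keys line:

    import base64, hashlib

    def fingerprint(authorized_keys_line: str) -> str:
        blob = base64.b64decode(authorized_keys_line.split()[1])  # field 2 is the key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")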
Jan 28 01:25:52.171350 systemd[1]: Started sshd@2-10.200.20.23:22-10.200.16.10:57074.service - OpenSSH per-connection server daemon (10.200.16.10:57074). Jan 28 01:25:52.614880 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 57074 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:52.616181 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:52.619683 systemd-logind[1791]: New session 5 of user core. Jan 28 01:25:52.626352 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:25:52.642543 chronyd[1773]: Selected source PHC0 Jan 28 01:25:52.944352 sshd[2306]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:52.948428 systemd[1]: sshd@2-10.200.20.23:22-10.200.16.10:57074.service: Deactivated successfully. Jan 28 01:25:52.949199 systemd-logind[1791]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:25:52.951118 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:25:52.951805 systemd-logind[1791]: Removed session 5. Jan 28 01:25:53.039530 systemd[1]: Started sshd@3-10.200.20.23:22-10.200.16.10:57084.service - OpenSSH per-connection server daemon (10.200.16.10:57084). Jan 28 01:25:53.518871 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 57084 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:53.520205 sshd[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:53.525054 systemd-logind[1791]: New session 6 of user core. Jan 28 01:25:53.531403 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:25:53.869343 sshd[2314]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:53.873292 systemd-logind[1791]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:25:53.873903 systemd[1]: sshd@3-10.200.20.23:22-10.200.16.10:57084.service: Deactivated successfully. Jan 28 01:25:53.876423 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:25:53.877488 systemd-logind[1791]: Removed session 6. Jan 28 01:25:53.951490 systemd[1]: Started sshd@4-10.200.20.23:22-10.200.16.10:57092.service - OpenSSH per-connection server daemon (10.200.16.10:57092). Jan 28 01:25:54.397253 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 57092 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:54.398539 sshd[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:54.402062 systemd-logind[1791]: New session 7 of user core. Jan 28 01:25:54.412515 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:25:54.795544 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:25:54.795815 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:54.826853 sudo[2326]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:54.903398 sshd[2322]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:54.907536 systemd[1]: sshd@4-10.200.20.23:22-10.200.16.10:57092.service: Deactivated successfully. Jan 28 01:25:54.910027 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:25:54.910291 systemd-logind[1791]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:25:54.912074 systemd-logind[1791]: Removed session 7. 
Jan 28 01:25:54.993471 systemd[1]: Started sshd@5-10.200.20.23:22-10.200.16.10:57102.service - OpenSSH per-connection server daemon (10.200.16.10:57102). Jan 28 01:25:55.436493 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 57102 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:55.437803 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:55.442630 systemd-logind[1791]: New session 8 of user core. Jan 28 01:25:55.447399 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:25:55.691063 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:25:55.691404 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:55.694991 sudo[2336]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:55.699311 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 01:25:55.699567 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:55.713799 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 01:25:55.714605 auditctl[2339]: No rules Jan 28 01:25:55.715052 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:25:55.715306 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 01:25:55.718395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:25:55.740537 augenrules[2358]: No rules Jan 28 01:25:55.741918 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:25:55.743885 sudo[2335]: pam_unix(sudo:session): session closed for user root Jan 28 01:25:55.822350 sshd[2331]: pam_unix(sshd:session): session closed for user core Jan 28 01:25:55.826223 systemd-logind[1791]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:25:55.826828 systemd[1]: sshd@5-10.200.20.23:22-10.200.16.10:57102.service: Deactivated successfully. Jan 28 01:25:55.829124 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:25:55.829935 systemd-logind[1791]: Removed session 8. Jan 28 01:25:55.901702 systemd[1]: Started sshd@6-10.200.20.23:22-10.200.16.10:57116.service - OpenSSH per-connection server daemon (10.200.16.10:57116). Jan 28 01:25:56.341041 sshd[2367]: Accepted publickey for core from 10.200.16.10 port 57116 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:25:56.343449 sshd[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:25:56.347045 systemd-logind[1791]: New session 9 of user core. Jan 28 01:25:56.354399 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:25:56.594693 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:25:56.594943 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:25:57.534337 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 28 01:25:57.534544 (dockerd)[2386]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:25:58.085178 dockerd[2386]: time="2026-01-28T01:25:58.084988539Z" level=info msg="Starting up" Jan 28 01:25:58.417975 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2354030896-merged.mount: Deactivated successfully. Jan 28 01:25:58.810026 systemd[1]: var-lib-docker-metacopy\x2dcheck4219611675-merged.mount: Deactivated successfully. Jan 28 01:25:58.824073 dockerd[2386]: time="2026-01-28T01:25:58.823864099Z" level=info msg="Loading containers: start." Jan 28 01:25:58.974174 kernel: Initializing XFRM netlink socket Jan 28 01:25:59.408804 systemd-networkd[1398]: docker0: Link UP Jan 28 01:25:59.429170 dockerd[2386]: time="2026-01-28T01:25:59.429075859Z" level=info msg="Loading containers: done." Jan 28 01:25:59.440234 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck323529173-merged.mount: Deactivated successfully. Jan 28 01:25:59.456985 dockerd[2386]: time="2026-01-28T01:25:59.456938299Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:25:59.457249 dockerd[2386]: time="2026-01-28T01:25:59.457232099Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:25:59.457437 dockerd[2386]: time="2026-01-28T01:25:59.457421579Z" level=info msg="Daemon has completed initialization" Jan 28 01:25:59.507964 dockerd[2386]: time="2026-01-28T01:25:59.507891059Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:25:59.508565 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:26:00.253365 containerd[1830]: time="2026-01-28T01:26:00.253330859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:26:01.156372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:26:01.162296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:01.190263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101453105.mount: Deactivated successfully. Jan 28 01:26:01.318346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:01.321842 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:26:01.705837 kubelet[2540]: E0128 01:26:01.423020 2540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:26:01.426333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:26:01.426483 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:26:02.896993 containerd[1830]: time="2026-01-28T01:26:02.896938625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:02.899208 containerd[1830]: time="2026-01-28T01:26:02.899181576Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 28 01:26:02.902240 containerd[1830]: time="2026-01-28T01:26:02.902185563Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:02.907265 containerd[1830]: time="2026-01-28T01:26:02.907219702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:02.908330 containerd[1830]: time="2026-01-28T01:26:02.907911019Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.65454292s" Jan 28 01:26:02.908330 containerd[1830]: time="2026-01-28T01:26:02.907946139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 28 01:26:02.908584 containerd[1830]: time="2026-01-28T01:26:02.908555417Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:26:04.327412 containerd[1830]: time="2026-01-28T01:26:04.327369383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:04.332699 containerd[1830]: time="2026-01-28T01:26:04.332667321Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 28 01:26:04.335134 containerd[1830]: time="2026-01-28T01:26:04.335094751Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:04.339570 containerd[1830]: time="2026-01-28T01:26:04.339531572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:04.340766 containerd[1830]: time="2026-01-28T01:26:04.340640728Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.432046672s" Jan 28 01:26:04.340766 containerd[1830]: time="2026-01-28T01:26:04.340672608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 28 
01:26:04.341615 containerd[1830]: time="2026-01-28T01:26:04.341592884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:26:05.507178 containerd[1830]: time="2026-01-28T01:26:05.506651472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:05.508990 containerd[1830]: time="2026-01-28T01:26:05.508956429Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 28 01:26:05.511631 containerd[1830]: time="2026-01-28T01:26:05.511586025Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:05.515806 containerd[1830]: time="2026-01-28T01:26:05.515752340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:05.517159 containerd[1830]: time="2026-01-28T01:26:05.516805578Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.175181454s" Jan 28 01:26:05.517159 containerd[1830]: time="2026-01-28T01:26:05.516836378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 28 01:26:05.517461 containerd[1830]: time="2026-01-28T01:26:05.517440377Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:26:06.633371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320847441.mount: Deactivated successfully. 
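[Editor's note] The three control-plane image pulls above each report a size and a wall-clock duration, so effective pull throughput can be read straight off the log. A small sketch using only the numbers reported ("size" bytes and the quoted durations; throughput figures are derived, not logged):

    # (size in bytes, pull duration in seconds), taken verbatim from the log.
    pulls = {
        "kube-apiserver":          (26438581, 2.65454292),
        "kube-controller-manager": (24206567, 1.432046672),
        "kube-scheduler":          (19201246, 1.175181454),
    }
    for name, (size_bytes, secs) in pulls.items():
        print(f"{name}: {size_bytes / secs / 2**20:.1f} MiB/s")
    # -> roughly 9.5, 16.1 and 15.6 MiB/s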
Jan 28 01:26:06.918817 containerd[1830]: time="2026-01-28T01:26:06.918093027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:06.920800 containerd[1830]: time="2026-01-28T01:26:06.920769464Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 28 01:26:06.926595 containerd[1830]: time="2026-01-28T01:26:06.925699017Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:06.929208 containerd[1830]: time="2026-01-28T01:26:06.929178773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:06.929693 containerd[1830]: time="2026-01-28T01:26:06.929654812Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.412031075s" Jan 28 01:26:06.929756 containerd[1830]: time="2026-01-28T01:26:06.929693252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 28 01:26:06.930244 containerd[1830]: time="2026-01-28T01:26:06.930221531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:26:07.557431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726444500.mount: Deactivated successfully. 
Jan 28 01:26:08.904974 containerd[1830]: time="2026-01-28T01:26:08.904933109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:08.907781 containerd[1830]: time="2026-01-28T01:26:08.907752385Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 28 01:26:08.910987 containerd[1830]: time="2026-01-28T01:26:08.910961021Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:08.915312 containerd[1830]: time="2026-01-28T01:26:08.915278215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:08.916096 containerd[1830]: time="2026-01-28T01:26:08.916065293Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.985731442s" Jan 28 01:26:08.916212 containerd[1830]: time="2026-01-28T01:26:08.916195053Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 28 01:26:08.916878 containerd[1830]: time="2026-01-28T01:26:08.916636133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:26:09.458319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153076613.mount: Deactivated successfully. 
Jan 28 01:26:09.523142 containerd[1830]: time="2026-01-28T01:26:09.522400581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:09.524349 containerd[1830]: time="2026-01-28T01:26:09.524324098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 28 01:26:09.527174 containerd[1830]: time="2026-01-28T01:26:09.527124894Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:09.531007 containerd[1830]: time="2026-01-28T01:26:09.530967168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:09.532007 containerd[1830]: time="2026-01-28T01:26:09.531896327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 615.230274ms" Jan 28 01:26:09.532007 containerd[1830]: time="2026-01-28T01:26:09.531925327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 28 01:26:09.532691 containerd[1830]: time="2026-01-28T01:26:09.532474766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:26:10.146043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904433299.mount: Deactivated successfully. Jan 28 01:26:10.840790 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 28 01:26:11.585518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:26:11.596295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:12.463407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:12.467345 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:26:12.820307 kubelet[2728]: E0128 01:26:12.819881 2728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:26:12.822819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:26:12.822989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:26:13.844648 update_engine[1795]: I20260128 01:26:13.844588 1795 update_attempter.cc:509] Updating boot flags... 
Jan 28 01:26:13.870361 containerd[1830]: time="2026-01-28T01:26:13.870157121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:13.872742 containerd[1830]: time="2026-01-28T01:26:13.872714918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 28 01:26:13.877765 containerd[1830]: time="2026-01-28T01:26:13.877617110Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:13.888440 containerd[1830]: time="2026-01-28T01:26:13.885129060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:13.888440 containerd[1830]: time="2026-01-28T01:26:13.885957978Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.353451372s" Jan 28 01:26:13.888440 containerd[1830]: time="2026-01-28T01:26:13.887854576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 28 01:26:13.911268 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2760) Jan 28 01:26:18.159683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:18.167335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:18.193167 systemd[1]: Reloading requested from client PID 2813 ('systemctl') (unit session-9.scope)... Jan 28 01:26:18.193182 systemd[1]: Reloading... Jan 28 01:26:18.291174 zram_generator::config[2854]: No configuration found. Jan 28 01:26:18.414399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:26:18.490770 systemd[1]: Reloading finished in 297 ms. Jan 28 01:26:18.544531 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:26:18.544605 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:26:18.545001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:18.556678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:18.705315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:18.716438 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:26:18.880344 kubelet[2932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:26:18.880344 kubelet[2932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 28 01:26:18.880344 kubelet[2932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:26:19.247885 kubelet[2932]: I0128 01:26:18.880407 2932 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:26:20.056172 kubelet[2932]: I0128 01:26:20.055045 2932 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:26:20.056172 kubelet[2932]: I0128 01:26:20.055083 2932 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:26:20.056172 kubelet[2932]: I0128 01:26:20.055539 2932 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:26:20.078358 kubelet[2932]: E0128 01:26:20.078320 2932 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:20.080840 kubelet[2932]: I0128 01:26:20.080822 2932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:26:20.086378 kubelet[2932]: E0128 01:26:20.086347 2932 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:26:20.086431 kubelet[2932]: I0128 01:26:20.086382 2932 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:26:20.089016 kubelet[2932]: I0128 01:26:20.089001 2932 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:26:20.089374 kubelet[2932]: I0128 01:26:20.089350 2932 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:26:20.089525 kubelet[2932]: I0128 01:26:20.089374 2932 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-11aaf12d54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:26:20.089610 kubelet[2932]: I0128 01:26:20.089533 2932 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:26:20.089610 kubelet[2932]: I0128 01:26:20.089541 2932 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:26:20.089675 kubelet[2932]: I0128 01:26:20.089659 2932 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:26:20.092460 kubelet[2932]: I0128 01:26:20.092444 2932 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:26:20.092510 kubelet[2932]: I0128 01:26:20.092468 2932 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:26:20.092510 kubelet[2932]: I0128 01:26:20.092486 2932 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:26:20.092510 kubelet[2932]: I0128 01:26:20.092495 2932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:26:20.096138 kubelet[2932]: W0128 01:26:20.096101 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:20.096218 kubelet[2932]: E0128 01:26:20.096166 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:20.097042 kubelet[2932]: W0128 
01:26:20.097009 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-11aaf12d54&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:20.097092 kubelet[2932]: E0128 01:26:20.097051 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-11aaf12d54&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:20.097140 kubelet[2932]: I0128 01:26:20.097127 2932 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:26:20.097610 kubelet[2932]: I0128 01:26:20.097586 2932 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:26:20.097663 kubelet[2932]: W0128 01:26:20.097639 2932 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:26:20.099051 kubelet[2932]: I0128 01:26:20.099011 2932 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:26:20.099051 kubelet[2932]: I0128 01:26:20.099046 2932 server.go:1287] "Started kubelet" Jan 28 01:26:20.100051 kubelet[2932]: I0128 01:26:20.099804 2932 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:26:20.102858 kubelet[2932]: I0128 01:26:20.102780 2932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:26:20.103139 kubelet[2932]: I0128 01:26:20.103123 2932 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:26:20.104219 kubelet[2932]: I0128 01:26:20.104200 2932 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:26:20.105055 kubelet[2932]: E0128 01:26:20.104883 2932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-11aaf12d54.188ec0b357456e39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-11aaf12d54,UID:ci-4081.3.6-n-11aaf12d54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-11aaf12d54,},FirstTimestamp:2026-01-28 01:26:20.099030585 +0000 UTC m=+1.379797623,LastTimestamp:2026-01-28 01:26:20.099030585 +0000 UTC m=+1.379797623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-11aaf12d54,}" Jan 28 01:26:20.106357 kubelet[2932]: I0128 01:26:20.106331 2932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:26:20.106723 kubelet[2932]: I0128 01:26:20.106707 2932 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:26:20.108416 kubelet[2932]: I0128 01:26:20.108311 2932 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:26:20.109032 kubelet[2932]: I0128 01:26:20.109018 2932 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:26:20.109247 kubelet[2932]: I0128 01:26:20.109181 2932 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:26:20.109598 kubelet[2932]: E0128 01:26:20.109578 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" Jan 28 01:26:20.110380 kubelet[2932]: W0128 01:26:20.110262 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:20.110380 kubelet[2932]: E0128 01:26:20.110316 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:20.111492 kubelet[2932]: E0128 01:26:20.111463 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-11aaf12d54?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="200ms" Jan 28 01:26:20.112935 kubelet[2932]: I0128 01:26:20.112331 2932 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:26:20.112935 kubelet[2932]: I0128 01:26:20.112406 2932 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:26:20.114118 kubelet[2932]: I0128 01:26:20.114097 2932 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:26:20.115830 kubelet[2932]: E0128 01:26:20.115813 2932 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:26:20.133054 kubelet[2932]: I0128 01:26:20.133010 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:26:20.134179 kubelet[2932]: I0128 01:26:20.134071 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:26:20.134179 kubelet[2932]: I0128 01:26:20.134094 2932 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:26:20.134179 kubelet[2932]: I0128 01:26:20.134113 2932 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
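[Editor's note] Every reflector, lease and event error above reduces to the same condition: nothing is listening on https://10.200.20.23:6443 yet, because this kubelet is starting before its own static kube-apiserver pod is up. A minimal standalone probe (hypothetical check, endpoint taken from the log) reproduces the "connection refused" the client-go reflectors are reporting:

    import socket

    # Plain TCP probe of the API server endpoint seen in the log above.
    try:
        socket.create_connection(("10.200.20.23", 6443), timeout=2).close()
        print("apiserver reachable")
    except OSError as exc:
        # Expected to fail with "connection refused" until the static
        # kube-apiserver pod created later in this log is running.
        print(f"connect failed: {exc}")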
Jan 28 01:26:20.134179 kubelet[2932]: I0128 01:26:20.134119 2932 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:26:20.134179 kubelet[2932]: E0128 01:26:20.134163 2932 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:26:20.135740 kubelet[2932]: W0128 01:26:20.135673 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:20.135740 kubelet[2932]: E0128 01:26:20.135709 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:20.176648 kubelet[2932]: I0128 01:26:20.176397 2932 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:26:20.176648 kubelet[2932]: I0128 01:26:20.176412 2932 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:26:20.176648 kubelet[2932]: I0128 01:26:20.176432 2932 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:26:20.181943 kubelet[2932]: I0128 01:26:20.181925 2932 policy_none.go:49] "None policy: Start" Jan 28 01:26:20.182041 kubelet[2932]: I0128 01:26:20.182032 2932 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:26:20.182103 kubelet[2932]: I0128 01:26:20.182095 2932 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:26:20.189050 kubelet[2932]: I0128 01:26:20.189030 2932 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:26:20.190158 kubelet[2932]: I0128 01:26:20.189327 2932 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:26:20.190158 kubelet[2932]: I0128 01:26:20.189342 2932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:26:20.190709 kubelet[2932]: I0128 01:26:20.190690 2932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:26:20.191979 kubelet[2932]: E0128 01:26:20.191965 2932 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:26:20.192091 kubelet[2932]: E0128 01:26:20.192081 2932 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-11aaf12d54\" not found" Jan 28 01:26:20.239651 kubelet[2932]: E0128 01:26:20.239622 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.241586 kubelet[2932]: E0128 01:26:20.241321 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.244535 kubelet[2932]: E0128 01:26:20.244453 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.291506 kubelet[2932]: I0128 01:26:20.291478 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.291860 kubelet[2932]: E0128 01:26:20.291820 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311200 kubelet[2932]: I0128 01:26:20.310245 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311200 kubelet[2932]: I0128 01:26:20.310282 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311200 kubelet[2932]: I0128 01:26:20.310301 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311448 kubelet[2932]: I0128 01:26:20.311217 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aed1e67ed8821205dce5a582738302ac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-11aaf12d54\" (UID: \"aed1e67ed8821205dce5a582738302ac\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311448 kubelet[2932]: I0128 01:26:20.311241 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c9d8b2919b6e19ba46c21279a65dfa3-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" (UID: \"4c9d8b2919b6e19ba46c21279a65dfa3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 
01:26:20.311448 kubelet[2932]: I0128 01:26:20.311256 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c9d8b2919b6e19ba46c21279a65dfa3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" (UID: \"4c9d8b2919b6e19ba46c21279a65dfa3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311448 kubelet[2932]: I0128 01:26:20.311281 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311448 kubelet[2932]: I0128 01:26:20.311298 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.311562 kubelet[2932]: I0128 01:26:20.311314 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c9d8b2919b6e19ba46c21279a65dfa3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" (UID: \"4c9d8b2919b6e19ba46c21279a65dfa3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.312829 kubelet[2932]: E0128 01:26:20.312793 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-11aaf12d54?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="400ms" Jan 28 01:26:20.493764 kubelet[2932]: I0128 01:26:20.493445 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.493764 kubelet[2932]: E0128 01:26:20.493729 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.542917 containerd[1830]: time="2026-01-28T01:26:20.542871514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-11aaf12d54,Uid:4c9d8b2919b6e19ba46c21279a65dfa3,Namespace:kube-system,Attempt:0,}" Jan 28 01:26:20.543391 containerd[1830]: time="2026-01-28T01:26:20.542878994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-11aaf12d54,Uid:065a41b3675a19446d13c2cde97b19d8,Namespace:kube-system,Attempt:0,}" Jan 28 01:26:20.546031 containerd[1830]: time="2026-01-28T01:26:20.545810469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-11aaf12d54,Uid:aed1e67ed8821205dce5a582738302ac,Namespace:kube-system,Attempt:0,}" Jan 28 01:26:20.714184 kubelet[2932]: E0128 01:26:20.714137 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-11aaf12d54?timeout=10s\": dial tcp 
10.200.20.23:6443: connect: connection refused" interval="800ms" Jan 28 01:26:20.895978 kubelet[2932]: I0128 01:26:20.895949 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.896294 kubelet[2932]: E0128 01:26:20.896266 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:20.902732 kubelet[2932]: W0128 01:26:20.902689 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-11aaf12d54&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:20.902786 kubelet[2932]: E0128 01:26:20.902747 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-11aaf12d54&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:21.131273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284152469.mount: Deactivated successfully. Jan 28 01:26:21.150730 containerd[1830]: time="2026-01-28T01:26:21.150683671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:26:21.152933 containerd[1830]: time="2026-01-28T01:26:21.152898187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 28 01:26:21.154992 containerd[1830]: time="2026-01-28T01:26:21.154908144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:26:21.157893 containerd[1830]: time="2026-01-28T01:26:21.157170540Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:26:21.159574 containerd[1830]: time="2026-01-28T01:26:21.159534695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:26:21.163268 containerd[1830]: time="2026-01-28T01:26:21.162623730Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:26:21.164316 containerd[1830]: time="2026-01-28T01:26:21.164072447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:26:21.168334 containerd[1830]: time="2026-01-28T01:26:21.168249760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:26:21.170174 containerd[1830]: time="2026-01-28T01:26:21.169671357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 623.792928ms" Jan 28 01:26:21.170665 containerd[1830]: time="2026-01-28T01:26:21.170634596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.682002ms" Jan 28 01:26:21.171851 containerd[1830]: time="2026-01-28T01:26:21.171824274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 628.76ms" Jan 28 01:26:21.315392 kubelet[2932]: W0128 01:26:21.315337 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:21.315749 kubelet[2932]: E0128 01:26:21.315399 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:21.400570 kubelet[2932]: W0128 01:26:21.400410 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:21.400570 kubelet[2932]: E0128 01:26:21.400474 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:21.422352 kubelet[2932]: W0128 01:26:21.422300 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.23:6443: connect: connection refused Jan 28 01:26:21.422467 kubelet[2932]: E0128 01:26:21.422361 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.23:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:26:21.514823 kubelet[2932]: E0128 01:26:21.514763 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-11aaf12d54?timeout=10s\": dial tcp 10.200.20.23:6443: connect: connection refused" interval="1.6s" 
Jan 28 01:26:21.698064 kubelet[2932]: I0128 01:26:21.697962 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:21.698548 kubelet[2932]: E0128 01:26:21.698425 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.23:6443/api/v1/nodes\": dial tcp 10.200.20.23:6443: connect: connection refused" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:21.780490 containerd[1830]: time="2026-01-28T01:26:21.780329830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:21.780956 containerd[1830]: time="2026-01-28T01:26:21.780524509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:21.780956 containerd[1830]: time="2026-01-28T01:26:21.780541949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.782431 containerd[1830]: time="2026-01-28T01:26:21.782265226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:21.782547 containerd[1830]: time="2026-01-28T01:26:21.782432466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:21.782547 containerd[1830]: time="2026-01-28T01:26:21.782362826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:21.782547 containerd[1830]: time="2026-01-28T01:26:21.782490906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:21.782643 containerd[1830]: time="2026-01-28T01:26:21.782615945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.782877 containerd[1830]: time="2026-01-28T01:26:21.782839385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.782980 containerd[1830]: time="2026-01-28T01:26:21.782943665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.786127 containerd[1830]: time="2026-01-28T01:26:21.786061899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.786911 containerd[1830]: time="2026-01-28T01:26:21.786826458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:21.845334 containerd[1830]: time="2026-01-28T01:26:21.845287114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-11aaf12d54,Uid:4c9d8b2919b6e19ba46c21279a65dfa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"165d68fd16b1ccd4a06c45b47af8866a17f979244c2acbd36e6d1bbab9ad5a09\"" Jan 28 01:26:21.851764 containerd[1830]: time="2026-01-28T01:26:21.851620182Z" level=info msg="CreateContainer within sandbox \"165d68fd16b1ccd4a06c45b47af8866a17f979244c2acbd36e6d1bbab9ad5a09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:26:21.856752 containerd[1830]: time="2026-01-28T01:26:21.856710573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-11aaf12d54,Uid:aed1e67ed8821205dce5a582738302ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"6050937cb104d975775fb386fe20287fa30abe362b578dc029d776935b53da09\"" Jan 28 01:26:21.859341 containerd[1830]: time="2026-01-28T01:26:21.859316529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-11aaf12d54,Uid:065a41b3675a19446d13c2cde97b19d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dedf382c6821136fc7253a5957ee6d2fd7cba7055bce8a070f871df2ab927a08\"" Jan 28 01:26:21.860106 containerd[1830]: time="2026-01-28T01:26:21.860083367Z" level=info msg="CreateContainer within sandbox \"6050937cb104d975775fb386fe20287fa30abe362b578dc029d776935b53da09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:26:21.862267 containerd[1830]: time="2026-01-28T01:26:21.862234084Z" level=info msg="CreateContainer within sandbox \"dedf382c6821136fc7253a5957ee6d2fd7cba7055bce8a070f871df2ab927a08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:26:21.929740 containerd[1830]: time="2026-01-28T01:26:21.929693723Z" level=info msg="CreateContainer within sandbox \"165d68fd16b1ccd4a06c45b47af8866a17f979244c2acbd36e6d1bbab9ad5a09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e8b5186f8bf117a70f751083b38c73d6f8568df224bf6d6b709846d6b2aa13ef\"" Jan 28 01:26:21.930345 containerd[1830]: time="2026-01-28T01:26:21.930321722Z" level=info msg="StartContainer for \"e8b5186f8bf117a70f751083b38c73d6f8568df224bf6d6b709846d6b2aa13ef\"" Jan 28 01:26:21.934786 containerd[1830]: time="2026-01-28T01:26:21.934609835Z" level=info msg="CreateContainer within sandbox \"6050937cb104d975775fb386fe20287fa30abe362b578dc029d776935b53da09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"783ba2d35b05fb4d3ae20f90a899f3506a9b43d06b96f193cdd1e17e8ee966df\"" Jan 28 01:26:21.935341 containerd[1830]: time="2026-01-28T01:26:21.935290593Z" level=info msg="StartContainer for \"783ba2d35b05fb4d3ae20f90a899f3506a9b43d06b96f193cdd1e17e8ee966df\"" Jan 28 01:26:21.935848 containerd[1830]: time="2026-01-28T01:26:21.935816633Z" level=info msg="CreateContainer within sandbox \"dedf382c6821136fc7253a5957ee6d2fd7cba7055bce8a070f871df2ab927a08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"686bf6b6c499c6451d0a120a8f197216c78d2479f43f6dacce07c72d55f2eae3\"" Jan 28 01:26:21.936387 containerd[1830]: time="2026-01-28T01:26:21.936362192Z" level=info msg="StartContainer for \"686bf6b6c499c6451d0a120a8f197216c78d2479f43f6dacce07c72d55f2eae3\"" Jan 28 01:26:22.036210 containerd[1830]: time="2026-01-28T01:26:22.035035496Z" level=info 
msg="StartContainer for \"e8b5186f8bf117a70f751083b38c73d6f8568df224bf6d6b709846d6b2aa13ef\" returns successfully" Jan 28 01:26:22.036210 containerd[1830]: time="2026-01-28T01:26:22.035058776Z" level=info msg="StartContainer for \"783ba2d35b05fb4d3ae20f90a899f3506a9b43d06b96f193cdd1e17e8ee966df\" returns successfully" Jan 28 01:26:22.036210 containerd[1830]: time="2026-01-28T01:26:22.035105856Z" level=info msg="StartContainer for \"686bf6b6c499c6451d0a120a8f197216c78d2479f43f6dacce07c72d55f2eae3\" returns successfully" Jan 28 01:26:22.149211 kubelet[2932]: E0128 01:26:22.148722 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:22.155137 kubelet[2932]: E0128 01:26:22.154980 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:22.156153 kubelet[2932]: E0128 01:26:22.155267 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:23.157562 kubelet[2932]: E0128 01:26:23.157527 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:23.157873 kubelet[2932]: E0128 01:26:23.157834 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:23.302318 kubelet[2932]: I0128 01:26:23.302288 2932 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:23.975272 kubelet[2932]: E0128 01:26:23.975236 2932 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-11aaf12d54\" not found" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.087165 kubelet[2932]: I0128 01:26:24.085890 2932 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.087165 kubelet[2932]: E0128 01:26:24.085927 2932 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-11aaf12d54\": node \"ci-4081.3.6-n-11aaf12d54\" not found" Jan 28 01:26:24.097193 kubelet[2932]: I0128 01:26:24.096854 2932 apiserver.go:52] "Watching apiserver" Jan 28 01:26:24.111297 kubelet[2932]: I0128 01:26:24.111250 2932 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:26:24.111297 kubelet[2932]: I0128 01:26:24.111269 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.125251 kubelet[2932]: E0128 01:26:24.125215 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.125251 kubelet[2932]: I0128 01:26:24.125248 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.131543 kubelet[2932]: E0128 01:26:24.131356 2932 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.131543 kubelet[2932]: I0128 01:26:24.131385 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.133243 kubelet[2932]: E0128 01:26:24.133201 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-11aaf12d54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.158386 kubelet[2932]: I0128 01:26:24.158270 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.160496 kubelet[2932]: I0128 01:26:24.160272 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.164649 kubelet[2932]: E0128 01:26:24.164537 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:24.166295 kubelet[2932]: E0128 01:26:24.164986 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-11aaf12d54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:26.293094 systemd[1]: Reloading requested from client PID 3208 ('systemctl') (unit session-9.scope)... Jan 28 01:26:26.293161 systemd[1]: Reloading... Jan 28 01:26:26.376197 zram_generator::config[3248]: No configuration found. Jan 28 01:26:26.502381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:26:26.594526 systemd[1]: Reloading finished in 300 ms. Jan 28 01:26:26.621022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:26.641690 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:26:26.641972 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:26.654410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:26:26.763497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:26:26.773201 (kubelet)[3322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:26:26.818172 kubelet[3322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:26:26.818172 kubelet[3322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:26:26.818172 kubelet[3322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 01:26:26.818172 kubelet[3322]: I0128 01:26:26.817819 3322 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:26:26.831407 kubelet[3322]: I0128 01:26:26.831362 3322 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:26:26.831407 kubelet[3322]: I0128 01:26:26.831387 3322 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:26:26.831895 kubelet[3322]: I0128 01:26:26.831873 3322 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:26:26.833079 kubelet[3322]: I0128 01:26:26.833062 3322 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:26:26.836377 kubelet[3322]: I0128 01:26:26.836352 3322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:26:26.841183 kubelet[3322]: E0128 01:26:26.841142 3322 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:26:26.843119 kubelet[3322]: I0128 01:26:26.841602 3322 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:26:26.847539 kubelet[3322]: I0128 01:26:26.846110 3322 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 01:26:26.847539 kubelet[3322]: I0128 01:26:26.846643 3322 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:26:26.847955 kubelet[3322]: I0128 01:26:26.846665 3322 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-11aaf12d54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:26:26.848119 kubelet[3322]: I0128 01:26:26.848108 3322 
topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:26:26.848208 kubelet[3322]: I0128 01:26:26.848198 3322 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:26:26.848351 kubelet[3322]: I0128 01:26:26.848341 3322 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:26:26.849709 kubelet[3322]: I0128 01:26:26.849691 3322 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:26:26.849927 kubelet[3322]: I0128 01:26:26.849915 3322 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:26:26.850224 kubelet[3322]: I0128 01:26:26.850209 3322 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:26:26.851186 kubelet[3322]: I0128 01:26:26.850302 3322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:26:26.857334 kubelet[3322]: I0128 01:26:26.856341 3322 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:26:26.857334 kubelet[3322]: I0128 01:26:26.857004 3322 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:26:26.857470 kubelet[3322]: I0128 01:26:26.857420 3322 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:26:26.857470 kubelet[3322]: I0128 01:26:26.857448 3322 server.go:1287] "Started kubelet" Jan 28 01:26:26.863552 kubelet[3322]: I0128 01:26:26.863529 3322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:26:26.870300 kubelet[3322]: I0128 01:26:26.870194 3322 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:26:26.871436 kubelet[3322]: I0128 01:26:26.870930 3322 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:26:26.872105 kubelet[3322]: I0128 01:26:26.871824 3322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:26:26.872105 kubelet[3322]: I0128 01:26:26.871998 3322 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:26:26.872544 kubelet[3322]: I0128 01:26:26.872214 3322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:26:26.880153 kubelet[3322]: I0128 01:26:26.876236 3322 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:26:26.880153 kubelet[3322]: E0128 01:26:26.876427 3322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-11aaf12d54\" not found" Jan 28 01:26:26.880153 kubelet[3322]: I0128 01:26:26.876685 3322 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:26:26.880153 kubelet[3322]: I0128 01:26:26.876797 3322 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:26:26.880908 kubelet[3322]: I0128 01:26:26.880887 3322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:26:26.891162 kubelet[3322]: I0128 01:26:26.889303 3322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:26:26.893565 kubelet[3322]: I0128 01:26:26.893543 3322 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 01:26:26.893668 kubelet[3322]: I0128 01:26:26.893660 3322 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:26:26.893740 kubelet[3322]: I0128 01:26:26.893731 3322 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:26:26.894198 kubelet[3322]: I0128 01:26:26.894183 3322 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:26:26.895433 kubelet[3322]: E0128 01:26:26.895336 3322 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:26:26.898365 kubelet[3322]: E0128 01:26:26.889804 3322 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:26:26.908158 kubelet[3322]: I0128 01:26:26.890102 3322 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:26:26.908158 kubelet[3322]: I0128 01:26:26.907211 3322 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:26:26.998355 kubelet[3322]: E0128 01:26:26.998289 3322 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:26:27.014558 kubelet[3322]: I0128 01:26:27.014526 3322 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:26:27.014558 kubelet[3322]: I0128 01:26:27.014547 3322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:26:27.014558 kubelet[3322]: I0128 01:26:27.014567 3322 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:26:27.014739 kubelet[3322]: I0128 01:26:27.014722 3322 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:26:27.014766 kubelet[3322]: I0128 01:26:27.014736 3322 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:26:27.014766 kubelet[3322]: I0128 01:26:27.014756 3322 policy_none.go:49] "None policy: Start" Jan 28 01:26:27.014766 kubelet[3322]: I0128 01:26:27.014765 3322 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:26:27.014834 kubelet[3322]: I0128 01:26:27.014773 3322 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:26:27.014873 kubelet[3322]: I0128 01:26:27.014860 3322 state_mem.go:75] "Updated machine memory state" Jan 28 01:26:27.015872 kubelet[3322]: I0128 01:26:27.015855 3322 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:26:27.016200 kubelet[3322]: I0128 01:26:27.016013 3322 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:26:27.016200 kubelet[3322]: I0128 01:26:27.016030 3322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:26:27.018711 kubelet[3322]: I0128 01:26:27.017356 3322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:26:27.019434 kubelet[3322]: E0128 01:26:27.019411 3322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:26:27.207228 kubelet[3322]: I0128 01:26:27.128516 3322 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.207228 kubelet[3322]: I0128 01:26:27.141083 3322 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.207228 kubelet[3322]: I0128 01:26:27.199499 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.207228 kubelet[3322]: I0128 01:26:27.199539 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.207228 kubelet[3322]: I0128 01:26:27.199837 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.207799 kubelet[3322]: I0128 01:26:27.207479 3322 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.210101 kubelet[3322]: W0128 01:26:27.209911 3322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:26:27.215548 kubelet[3322]: W0128 01:26:27.215518 3322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:26:27.215645 kubelet[3322]: W0128 01:26:27.215518 3322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:26:27.281164 kubelet[3322]: I0128 01:26:27.278915 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281164 kubelet[3322]: I0128 01:26:27.278956 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281164 kubelet[3322]: I0128 01:26:27.278973 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c9d8b2919b6e19ba46c21279a65dfa3-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" (UID: \"4c9d8b2919b6e19ba46c21279a65dfa3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281164 kubelet[3322]: I0128 01:26:27.278989 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c9d8b2919b6e19ba46c21279a65dfa3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" (UID: \"4c9d8b2919b6e19ba46c21279a65dfa3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281164 kubelet[3322]: I0128 01:26:27.279005 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c9d8b2919b6e19ba46c21279a65dfa3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-11aaf12d54\" (UID: \"4c9d8b2919b6e19ba46c21279a65dfa3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281380 kubelet[3322]: I0128 01:26:27.279024 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281380 kubelet[3322]: I0128 01:26:27.279041 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281380 kubelet[3322]: I0128 01:26:27.279055 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/065a41b3675a19446d13c2cde97b19d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" (UID: \"065a41b3675a19446d13c2cde97b19d8\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.281380 kubelet[3322]: I0128 01:26:27.279072 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aed1e67ed8821205dce5a582738302ac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-11aaf12d54\" (UID: \"aed1e67ed8821205dce5a582738302ac\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.856740 kubelet[3322]: I0128 01:26:27.856260 3322 apiserver.go:52] "Watching apiserver" Jan 28 01:26:27.877443 kubelet[3322]: I0128 01:26:27.877408 3322 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:26:27.929360 kubelet[3322]: I0128 01:26:27.929204 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" podStartSLOduration=0.92918498 podStartE2EDuration="929.18498ms" podCreationTimestamp="2026-01-28 01:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:27.909508808 +0000 UTC m=+1.133454168" watchObservedRunningTime="2026-01-28 01:26:27.92918498 +0000 UTC m=+1.153130380" Jan 28 01:26:27.942701 kubelet[3322]: I0128 01:26:27.942644 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-11aaf12d54" podStartSLOduration=0.942517161 podStartE2EDuration="942.517161ms" podCreationTimestamp="2026-01-28 01:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:27.929909099 +0000 UTC m=+1.153854459" watchObservedRunningTime="2026-01-28 01:26:27.942517161 +0000 UTC m=+1.166462521" Jan 28 01:26:27.959933 kubelet[3322]: I0128 01:26:27.959882 3322 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-11aaf12d54" podStartSLOduration=0.959862536 podStartE2EDuration="959.862536ms" podCreationTimestamp="2026-01-28 01:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:27.943321639 +0000 UTC m=+1.167266999" watchObservedRunningTime="2026-01-28 01:26:27.959862536 +0000 UTC m=+1.183807896" Jan 28 01:26:27.971678 kubelet[3322]: I0128 01:26:27.971641 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:27.985185 kubelet[3322]: W0128 01:26:27.984822 3322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 01:26:27.985185 kubelet[3322]: E0128 01:26:27.984885 3322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-11aaf12d54\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-11aaf12d54" Jan 28 01:26:31.442370 kubelet[3322]: I0128 01:26:31.442340 3322 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:26:31.442738 containerd[1830]: time="2026-01-28T01:26:31.442592918Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:26:31.442915 kubelet[3322]: I0128 01:26:31.442737 3322 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:26:32.508007 kubelet[3322]: I0128 01:26:32.507963 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7b766ff-17b3-4b63-8916-70bb9b6714e0-kube-proxy\") pod \"kube-proxy-zbbx9\" (UID: \"e7b766ff-17b3-4b63-8916-70bb9b6714e0\") " pod="kube-system/kube-proxy-zbbx9" Jan 28 01:26:32.508382 kubelet[3322]: I0128 01:26:32.508024 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7b766ff-17b3-4b63-8916-70bb9b6714e0-xtables-lock\") pod \"kube-proxy-zbbx9\" (UID: \"e7b766ff-17b3-4b63-8916-70bb9b6714e0\") " pod="kube-system/kube-proxy-zbbx9" Jan 28 01:26:32.508382 kubelet[3322]: I0128 01:26:32.508045 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7b766ff-17b3-4b63-8916-70bb9b6714e0-lib-modules\") pod \"kube-proxy-zbbx9\" (UID: \"e7b766ff-17b3-4b63-8916-70bb9b6714e0\") " pod="kube-system/kube-proxy-zbbx9" Jan 28 01:26:32.508382 kubelet[3322]: I0128 01:26:32.508061 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8m9\" (UniqueName: \"kubernetes.io/projected/e7b766ff-17b3-4b63-8916-70bb9b6714e0-kube-api-access-6x8m9\") pod \"kube-proxy-zbbx9\" (UID: \"e7b766ff-17b3-4b63-8916-70bb9b6714e0\") " pod="kube-system/kube-proxy-zbbx9" Jan 28 01:26:32.610194 kubelet[3322]: I0128 01:26:32.609218 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/520937b3-f43d-4349-80bf-3100f2df0fb1-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jw6qm\" (UID: 
\"520937b3-f43d-4349-80bf-3100f2df0fb1\") " pod="tigera-operator/tigera-operator-7dcd859c48-jw6qm" Jan 28 01:26:32.612140 kubelet[3322]: I0128 01:26:32.612120 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5pfk\" (UniqueName: \"kubernetes.io/projected/520937b3-f43d-4349-80bf-3100f2df0fb1-kube-api-access-r5pfk\") pod \"tigera-operator-7dcd859c48-jw6qm\" (UID: \"520937b3-f43d-4349-80bf-3100f2df0fb1\") " pod="tigera-operator/tigera-operator-7dcd859c48-jw6qm" Jan 28 01:26:32.740735 containerd[1830]: time="2026-01-28T01:26:32.740698689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbbx9,Uid:e7b766ff-17b3-4b63-8916-70bb9b6714e0,Namespace:kube-system,Attempt:0,}" Jan 28 01:26:32.782012 containerd[1830]: time="2026-01-28T01:26:32.781724350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:32.782012 containerd[1830]: time="2026-01-28T01:26:32.781809910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:32.782012 containerd[1830]: time="2026-01-28T01:26:32.781834550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:32.782234 containerd[1830]: time="2026-01-28T01:26:32.781932390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:32.814634 containerd[1830]: time="2026-01-28T01:26:32.814571744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbbx9,Uid:e7b766ff-17b3-4b63-8916-70bb9b6714e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"398d367737a14276f541209f5e15e7f683fa4cf13cb44f0a7d6b0a071f60b709\"" Jan 28 01:26:32.817850 containerd[1830]: time="2026-01-28T01:26:32.817707019Z" level=info msg="CreateContainer within sandbox \"398d367737a14276f541209f5e15e7f683fa4cf13cb44f0a7d6b0a071f60b709\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:26:32.863461 containerd[1830]: time="2026-01-28T01:26:32.863340234Z" level=info msg="CreateContainer within sandbox \"398d367737a14276f541209f5e15e7f683fa4cf13cb44f0a7d6b0a071f60b709\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f32346b8a9265a67835c39f8f2c44eb441423373e7950878d316f21641889bad\"" Jan 28 01:26:32.866086 containerd[1830]: time="2026-01-28T01:26:32.864662993Z" level=info msg="StartContainer for \"f32346b8a9265a67835c39f8f2c44eb441423373e7950878d316f21641889bad\"" Jan 28 01:26:32.901812 containerd[1830]: time="2026-01-28T01:26:32.901774900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jw6qm,Uid:520937b3-f43d-4349-80bf-3100f2df0fb1,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:26:32.921021 containerd[1830]: time="2026-01-28T01:26:32.920839513Z" level=info msg="StartContainer for \"f32346b8a9265a67835c39f8f2c44eb441423373e7950878d316f21641889bad\" returns successfully" Jan 28 01:26:32.944115 containerd[1830]: time="2026-01-28T01:26:32.944023320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:32.944426 containerd[1830]: time="2026-01-28T01:26:32.944320719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:32.944621 containerd[1830]: time="2026-01-28T01:26:32.944337239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:32.945112 containerd[1830]: time="2026-01-28T01:26:32.944962879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:32.997000 containerd[1830]: time="2026-01-28T01:26:32.996957765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jw6qm,Uid:520937b3-f43d-4349-80bf-3100f2df0fb1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f636abcef1f92e536c0bac6120cbe1d4d39895090ba9fab9a73666de2d180b1e\"" Jan 28 01:26:33.001892 kubelet[3322]: I0128 01:26:33.001482 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zbbx9" podStartSLOduration=1.001464358 podStartE2EDuration="1.001464358s" podCreationTimestamp="2026-01-28 01:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:26:33.000935359 +0000 UTC m=+6.224880719" watchObservedRunningTime="2026-01-28 01:26:33.001464358 +0000 UTC m=+6.225409718" Jan 28 01:26:33.004044 containerd[1830]: time="2026-01-28T01:26:33.003904515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:26:34.813686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836964861.mount: Deactivated successfully. Jan 28 01:26:35.529970 containerd[1830]: time="2026-01-28T01:26:35.529917326Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:35.532198 containerd[1830]: time="2026-01-28T01:26:35.532169083Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 28 01:26:35.535885 containerd[1830]: time="2026-01-28T01:26:35.535847358Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:35.540283 containerd[1830]: time="2026-01-28T01:26:35.540213472Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:26:35.540849 containerd[1830]: time="2026-01-28T01:26:35.540818471Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.536877796s" Jan 28 01:26:35.540904 containerd[1830]: time="2026-01-28T01:26:35.540849351Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 28 01:26:35.544452 containerd[1830]: time="2026-01-28T01:26:35.544417026Z" level=info msg="CreateContainer within sandbox \"f636abcef1f92e536c0bac6120cbe1d4d39895090ba9fab9a73666de2d180b1e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:26:35.574148 containerd[1830]: 
time="2026-01-28T01:26:35.574097623Z" level=info msg="CreateContainer within sandbox \"f636abcef1f92e536c0bac6120cbe1d4d39895090ba9fab9a73666de2d180b1e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0ba9d7fea65b25319cd17b190be938870cff6aae6a36f8f1ccc4331bf0007140\"" Jan 28 01:26:35.575980 containerd[1830]: time="2026-01-28T01:26:35.575953541Z" level=info msg="StartContainer for \"0ba9d7fea65b25319cd17b190be938870cff6aae6a36f8f1ccc4331bf0007140\"" Jan 28 01:26:35.630028 containerd[1830]: time="2026-01-28T01:26:35.629920664Z" level=info msg="StartContainer for \"0ba9d7fea65b25319cd17b190be938870cff6aae6a36f8f1ccc4331bf0007140\" returns successfully" Jan 28 01:26:38.973364 kubelet[3322]: I0128 01:26:38.973253 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jw6qm" podStartSLOduration=4.434704 podStartE2EDuration="6.973237594s" podCreationTimestamp="2026-01-28 01:26:32 +0000 UTC" firstStartedPulling="2026-01-28 01:26:33.003405315 +0000 UTC m=+6.227350675" lastFinishedPulling="2026-01-28 01:26:35.541938909 +0000 UTC m=+8.765884269" observedRunningTime="2026-01-28 01:26:36.001790056 +0000 UTC m=+9.225735416" watchObservedRunningTime="2026-01-28 01:26:38.973237594 +0000 UTC m=+12.197182954" Jan 28 01:26:41.527138 sudo[2371]: pam_unix(sudo:session): session closed for user root Jan 28 01:26:41.603929 sshd[2367]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:41.610439 systemd[1]: sshd@6-10.200.20.23:22-10.200.16.10:57116.service: Deactivated successfully. Jan 28 01:26:41.617887 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:26:41.621063 systemd-logind[1791]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:26:41.622749 systemd-logind[1791]: Removed session 9. 
Jan 28 01:26:52.927788 kubelet[3322]: I0128 01:26:52.927619 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5cd\" (UniqueName: \"kubernetes.io/projected/00d67702-56f6-4aac-be14-35c632a359bc-kube-api-access-wn5cd\") pod \"calico-typha-57b4bf9746-mt57n\" (UID: \"00d67702-56f6-4aac-be14-35c632a359bc\") " pod="calico-system/calico-typha-57b4bf9746-mt57n" Jan 28 01:26:52.927788 kubelet[3322]: I0128 01:26:52.927661 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00d67702-56f6-4aac-be14-35c632a359bc-tigera-ca-bundle\") pod \"calico-typha-57b4bf9746-mt57n\" (UID: \"00d67702-56f6-4aac-be14-35c632a359bc\") " pod="calico-system/calico-typha-57b4bf9746-mt57n" Jan 28 01:26:52.927788 kubelet[3322]: I0128 01:26:52.927680 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00d67702-56f6-4aac-be14-35c632a359bc-typha-certs\") pod \"calico-typha-57b4bf9746-mt57n\" (UID: \"00d67702-56f6-4aac-be14-35c632a359bc\") " pod="calico-system/calico-typha-57b4bf9746-mt57n" Jan 28 01:26:54.251823 kubelet[3322]: E0128 01:26:54.251747 3322 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.356s" Jan 28 01:26:54.253364 kubelet[3322]: E0128 01:26:54.252241 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:26:54.336838 kubelet[3322]: I0128 01:26:54.336799 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-xtables-lock\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.336838 kubelet[3322]: I0128 01:26:54.336843 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a3f2b82-9dfb-45f4-8480-07421e1f39e6-registration-dir\") pod \"csi-node-driver-7cvf4\" (UID: \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\") " pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:26:54.336997 kubelet[3322]: I0128 01:26:54.336865 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cc0e425d-28f5-415a-bd69-ce03688c5e78-node-certs\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.336997 kubelet[3322]: I0128 01:26:54.336880 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-cni-log-dir\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.336997 kubelet[3322]: I0128 01:26:54.336895 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cc0e425d-28f5-415a-bd69-ce03688c5e78-tigera-ca-bundle\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.336997 kubelet[3322]: I0128 01:26:54.336910 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-var-run-calico\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.336997 kubelet[3322]: I0128 01:26:54.336926 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nw9b\" (UniqueName: \"kubernetes.io/projected/cc0e425d-28f5-415a-bd69-ce03688c5e78-kube-api-access-5nw9b\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337108 kubelet[3322]: I0128 01:26:54.336943 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a3f2b82-9dfb-45f4-8480-07421e1f39e6-kubelet-dir\") pod \"csi-node-driver-7cvf4\" (UID: \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\") " pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:26:54.337108 kubelet[3322]: I0128 01:26:54.336961 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-cni-bin-dir\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337108 kubelet[3322]: I0128 01:26:54.336975 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-lib-modules\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337108 kubelet[3322]: I0128 01:26:54.336990 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-cni-net-dir\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337108 kubelet[3322]: I0128 01:26:54.337009 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a3f2b82-9dfb-45f4-8480-07421e1f39e6-socket-dir\") pod \"csi-node-driver-7cvf4\" (UID: \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\") " pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:26:54.337235 kubelet[3322]: I0128 01:26:54.337024 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26q2\" (UniqueName: \"kubernetes.io/projected/0a3f2b82-9dfb-45f4-8480-07421e1f39e6-kube-api-access-t26q2\") pod \"csi-node-driver-7cvf4\" (UID: \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\") " pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:26:54.337235 kubelet[3322]: I0128 01:26:54.337052 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-flexvol-driver-host\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337235 kubelet[3322]: I0128 01:26:54.337067 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-policysync\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337235 kubelet[3322]: I0128 01:26:54.337083 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cc0e425d-28f5-415a-bd69-ce03688c5e78-var-lib-calico\") pod \"calico-node-ds9m9\" (UID: \"cc0e425d-28f5-415a-bd69-ce03688c5e78\") " pod="calico-system/calico-node-ds9m9" Jan 28 01:26:54.337235 kubelet[3322]: I0128 01:26:54.337097 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a3f2b82-9dfb-45f4-8480-07421e1f39e6-varrun\") pod \"csi-node-driver-7cvf4\" (UID: \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\") " pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:26:54.450600 kubelet[3322]: E0128 01:26:54.447007 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:26:54.450600 kubelet[3322]: W0128 01:26:54.447028 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:26:54.450600 kubelet[3322]: E0128 01:26:54.447048 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:26:54.456961 kubelet[3322]: E0128 01:26:54.456940 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:26:54.457212 kubelet[3322]: W0128 01:26:54.457197 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:26:54.457295 kubelet[3322]: E0128 01:26:54.457283 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:26:54.462439 kubelet[3322]: E0128 01:26:54.462419 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:26:54.462439 kubelet[3322]: W0128 01:26:54.462435 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:26:54.462547 kubelet[3322]: E0128 01:26:54.462451 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:26:54.553239 containerd[1830]: time="2026-01-28T01:26:54.552862115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57b4bf9746-mt57n,Uid:00d67702-56f6-4aac-be14-35c632a359bc,Namespace:calico-system,Attempt:0,}" Jan 28 01:26:54.556780 containerd[1830]: time="2026-01-28T01:26:54.556525430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ds9m9,Uid:cc0e425d-28f5-415a-bd69-ce03688c5e78,Namespace:calico-system,Attempt:0,}" Jan 28 01:26:55.062064 containerd[1830]: time="2026-01-28T01:26:55.061865877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:55.062064 containerd[1830]: time="2026-01-28T01:26:55.061911677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:55.062064 containerd[1830]: time="2026-01-28T01:26:55.061922157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:55.062064 containerd[1830]: time="2026-01-28T01:26:55.061998397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:55.103789 containerd[1830]: time="2026-01-28T01:26:55.103752138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57b4bf9746-mt57n,Uid:00d67702-56f6-4aac-be14-35c632a359bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f91603b1f21930c0faf19212c3400e2b3bf04cb7e5d5b19daa1a246b6d3acca\"" Jan 28 01:26:55.105778 containerd[1830]: time="2026-01-28T01:26:55.105702056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:26:55.473201 containerd[1830]: time="2026-01-28T01:26:55.473040138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:26:55.473415 containerd[1830]: time="2026-01-28T01:26:55.473246097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:26:55.473415 containerd[1830]: time="2026-01-28T01:26:55.473270017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:55.474012 containerd[1830]: time="2026-01-28T01:26:55.473853696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:26:55.507859 containerd[1830]: time="2026-01-28T01:26:55.507818969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ds9m9,Uid:cc0e425d-28f5-415a-bd69-ce03688c5e78,Namespace:calico-system,Attempt:0,} returns sandbox id \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\"" Jan 28 01:26:55.895598 kubelet[3322]: E0128 01:26:55.895558 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:26:57.896287 kubelet[3322]: E0128 01:26:57.896242 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:26:59.525016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215213651.mount: Deactivated successfully. Jan 28 01:26:59.897013 kubelet[3322]: E0128 01:26:59.896969 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:27:00.865016 containerd[1830]: time="2026-01-28T01:27:00.864965731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:00.867175 containerd[1830]: time="2026-01-28T01:27:00.867127488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 28 01:27:00.869851 containerd[1830]: time="2026-01-28T01:27:00.869819244Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:00.875325 containerd[1830]: time="2026-01-28T01:27:00.875280477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:00.876451 containerd[1830]: time="2026-01-28T01:27:00.875958716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 5.7702275s" Jan 28 01:27:00.876451 containerd[1830]: time="2026-01-28T01:27:00.875988996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 28 01:27:00.876950 containerd[1830]: time="2026-01-28T01:27:00.876922835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:27:00.891958 containerd[1830]: time="2026-01-28T01:27:00.891903336Z" level=info 
msg="CreateContainer within sandbox \"2f91603b1f21930c0faf19212c3400e2b3bf04cb7e5d5b19daa1a246b6d3acca\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:27:00.931587 containerd[1830]: time="2026-01-28T01:27:00.931539365Z" level=info msg="CreateContainer within sandbox \"2f91603b1f21930c0faf19212c3400e2b3bf04cb7e5d5b19daa1a246b6d3acca\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1fd9ca064fae0793dcfff04f3ffa7980fd3172fd332aa657bf0c39e7763e6369\"" Jan 28 01:27:00.933370 containerd[1830]: time="2026-01-28T01:27:00.933335803Z" level=info msg="StartContainer for \"1fd9ca064fae0793dcfff04f3ffa7980fd3172fd332aa657bf0c39e7763e6369\"" Jan 28 01:27:00.991282 containerd[1830]: time="2026-01-28T01:27:00.991240208Z" level=info msg="StartContainer for \"1fd9ca064fae0793dcfff04f3ffa7980fd3172fd332aa657bf0c39e7763e6369\" returns successfully" Jan 28 01:27:01.062160 kubelet[3322]: E0128 01:27:01.061958 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.062160 kubelet[3322]: W0128 01:27:01.061982 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.062160 kubelet[3322]: E0128 01:27:01.062003 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.062828 kubelet[3322]: E0128 01:27:01.062437 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.062828 kubelet[3322]: W0128 01:27:01.062620 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.062828 kubelet[3322]: E0128 01:27:01.062667 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.063514 kubelet[3322]: E0128 01:27:01.063385 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.063514 kubelet[3322]: W0128 01:27:01.063401 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.063514 kubelet[3322]: E0128 01:27:01.063414 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:27:01.063900 kubelet[3322]: E0128 01:27:01.063838 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.064051 kubelet[3322]: W0128 01:27:01.063978 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.064051 kubelet[3322]: E0128 01:27:01.063999 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.064615 kubelet[3322]: E0128 01:27:01.064330 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.064615 kubelet[3322]: W0128 01:27:01.064345 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.064615 kubelet[3322]: E0128 01:27:01.064358 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.065661 kubelet[3322]: E0128 01:27:01.065530 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.065661 kubelet[3322]: W0128 01:27:01.065545 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.065661 kubelet[3322]: E0128 01:27:01.065558 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.066222 kubelet[3322]: E0128 01:27:01.066119 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.066222 kubelet[3322]: W0128 01:27:01.066160 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.066222 kubelet[3322]: E0128 01:27:01.066174 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.066706 kubelet[3322]: E0128 01:27:01.066599 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.066706 kubelet[3322]: W0128 01:27:01.066612 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.066706 kubelet[3322]: E0128 01:27:01.066623 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:27:01.067540 kubelet[3322]: E0128 01:27:01.067109 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.067540 kubelet[3322]: W0128 01:27:01.067122 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.067540 kubelet[3322]: E0128 01:27:01.067134 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.068298 kubelet[3322]: E0128 01:27:01.067958 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.068298 kubelet[3322]: W0128 01:27:01.067974 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.068298 kubelet[3322]: E0128 01:27:01.067986 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.069056 kubelet[3322]: E0128 01:27:01.068791 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.069056 kubelet[3322]: W0128 01:27:01.068808 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.069056 kubelet[3322]: E0128 01:27:01.068823 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.069739 kubelet[3322]: E0128 01:27:01.069467 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.069739 kubelet[3322]: W0128 01:27:01.069481 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.069739 kubelet[3322]: E0128 01:27:01.069495 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.070200 kubelet[3322]: E0128 01:27:01.070003 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.070200 kubelet[3322]: W0128 01:27:01.070018 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.070200 kubelet[3322]: E0128 01:27:01.070033 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:27:01.070690 kubelet[3322]: E0128 01:27:01.070524 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.070690 kubelet[3322]: W0128 01:27:01.070537 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.070690 kubelet[3322]: E0128 01:27:01.070549 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.071120 kubelet[3322]: E0128 01:27:01.071022 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.071120 kubelet[3322]: W0128 01:27:01.071035 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.071120 kubelet[3322]: E0128 01:27:01.071046 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.074440 kubelet[3322]: E0128 01:27:01.074339 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.074440 kubelet[3322]: W0128 01:27:01.074377 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.074440 kubelet[3322]: E0128 01:27:01.074391 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.075385 kubelet[3322]: E0128 01:27:01.075256 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.075385 kubelet[3322]: W0128 01:27:01.075272 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.075385 kubelet[3322]: E0128 01:27:01.075284 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.076241 kubelet[3322]: E0128 01:27:01.076087 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.076241 kubelet[3322]: W0128 01:27:01.076099 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.076241 kubelet[3322]: E0128 01:27:01.076111 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:27:01.079000 kubelet[3322]: E0128 01:27:01.078722 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.079000 kubelet[3322]: W0128 01:27:01.078992 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.079118 kubelet[3322]: E0128 01:27:01.079101 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.080322 kubelet[3322]: E0128 01:27:01.080299 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.080322 kubelet[3322]: W0128 01:27:01.080316 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.080470 kubelet[3322]: E0128 01:27:01.080373 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.082242 kubelet[3322]: E0128 01:27:01.082212 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.082242 kubelet[3322]: W0128 01:27:01.082237 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.082471 kubelet[3322]: E0128 01:27:01.082351 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.082563 kubelet[3322]: E0128 01:27:01.082549 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.082613 kubelet[3322]: W0128 01:27:01.082562 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.083339 kubelet[3322]: E0128 01:27:01.083310 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.083777 kubelet[3322]: E0128 01:27:01.083757 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.083777 kubelet[3322]: W0128 01:27:01.083774 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.084000 kubelet[3322]: E0128 01:27:01.083870 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:27:01.084799 kubelet[3322]: E0128 01:27:01.084773 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.084799 kubelet[3322]: W0128 01:27:01.084796 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.084940 kubelet[3322]: E0128 01:27:01.084912 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.085845 kubelet[3322]: E0128 01:27:01.085820 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.085845 kubelet[3322]: W0128 01:27:01.085843 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.085965 kubelet[3322]: E0128 01:27:01.085940 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.086947 kubelet[3322]: E0128 01:27:01.086924 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.086947 kubelet[3322]: W0128 01:27:01.086944 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.087264 kubelet[3322]: E0128 01:27:01.087040 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.087672 kubelet[3322]: E0128 01:27:01.087652 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.087672 kubelet[3322]: W0128 01:27:01.087671 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.088266 kubelet[3322]: E0128 01:27:01.088244 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.088786 kubelet[3322]: E0128 01:27:01.088765 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.088838 kubelet[3322]: W0128 01:27:01.088785 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.088906 kubelet[3322]: E0128 01:27:01.088875 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:27:01.089895 kubelet[3322]: E0128 01:27:01.089869 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.089895 kubelet[3322]: W0128 01:27:01.089892 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.090094 kubelet[3322]: E0128 01:27:01.089991 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.090466 kubelet[3322]: E0128 01:27:01.090353 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.090466 kubelet[3322]: W0128 01:27:01.090370 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.090585 kubelet[3322]: E0128 01:27:01.090571 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.091191 kubelet[3322]: E0128 01:27:01.090891 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.091191 kubelet[3322]: W0128 01:27:01.090903 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.091191 kubelet[3322]: E0128 01:27:01.090924 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.092255 kubelet[3322]: E0128 01:27:01.092240 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.092332 kubelet[3322]: W0128 01:27:01.092321 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.092393 kubelet[3322]: E0128 01:27:01.092384 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:27:01.092658 kubelet[3322]: E0128 01:27:01.092617 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:27:01.092658 kubelet[3322]: W0128 01:27:01.092628 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:27:01.092658 kubelet[3322]: E0128 01:27:01.092638 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
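The triple of entries repeating above is kubelet's FlexVolume prober at work: on each probe of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary in each plugin directory with the argument init and JSON-decodes its stdout. The nodeagent~uds directory is present but its uds executable is not, so the call yields empty output and the decode fails with "unexpected end of JSON input". A minimal sketch of what a driver at that path would have to print for the probe to succeed, assuming the documented FlexVolume call convention (this file and handler set are illustrative, not Calico's actual driver):

#!/usr/bin/env python3
# Hypothetical minimal FlexVolume driver sketch. kubelet invokes the binary
# with "init" when probing the plugin directory and expects a JSON object on
# stdout; empty stdout is exactly what produces the errors in the log above.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Minimal success response; "attach": False tells kubelet this
        # driver has no attach/detach phase.
        json.dump({"status": "Success", "capabilities": {"attach": False}}, sys.stdout)
        return 0
    # Any operation this sketch does not implement.
    json.dump({"status": "Not supported"}, sys.stdout)
    return 1

if __name__ == "__main__":
    sys.exit(main())

The flexvol-driver container created just below is Calico's pod2daemon mechanism for installing the real uds driver into this directory, which is presumably why the probe errors taper off once it has run.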
Jan 28 01:27:01.895637 kubelet[3322]: E0128 01:27:01.895580 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6"
Jan 28 01:27:02.022715 containerd[1830]: time="2026-01-28T01:27:02.022660083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:27:02.025155 containerd[1830]: time="2026-01-28T01:27:02.025113279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 28 01:27:02.027712 containerd[1830]: time="2026-01-28T01:27:02.027666236Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:27:02.031982 containerd[1830]: time="2026-01-28T01:27:02.031951871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:27:02.032887 containerd[1830]: time="2026-01-28T01:27:02.032642550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.155686675s"
Jan 28 01:27:02.032887 containerd[1830]: time="2026-01-28T01:27:02.032676470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 28 01:27:02.035234 containerd[1830]: time="2026-01-28T01:27:02.035096067Z" level=info msg="CreateContainer within sandbox \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 28 01:27:02.049238 kubelet[3322]: I0128 01:27:02.049211 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 01:27:02.069083 containerd[1830]: time="2026-01-28T01:27:02.068962183Z" level=info msg="CreateContainer within sandbox \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7be9e22da9c70c06f837c9e10e4aab239bc85721c6f16f2731b4be3257df4fb3\""
Jan 28 01:27:02.071782 containerd[1830]: time="2026-01-28T01:27:02.070296581Z" level=info msg="StartContainer for \"7be9e22da9c70c06f837c9e10e4aab239bc85721c6f16f2731b4be3257df4fb3\""
Jan 28 01:27:02.078754 kubelet[3322]: E0128 01:27:02.078729 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.079151 kubelet[3322]: W0128 01:27:02.079124 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.079247 kubelet[3322]: E0128 01:27:02.079234 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
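The pod2daemon-flexvol pull above records both the bytes fetched and the wall-clock pull time, so the effective fetch rate falls out directly. A quick check, with values copied from the entries and the interpretation of the two size figures hedged in the comments:

# Values from the "stop pulling image" and "Pulled image" entries above.
bytes_read = 4_266_741      # bytes containerd actually fetched for this pull
duration_s = 1.155686675    # reported pull duration
print(f"{bytes_read / duration_s / 1e6:.2f} MB/s effective fetch rate")  # ~3.69 MB/s
# The size "5636392" in the Pulled entry is the image's total content size;
# bytes read can be smaller when some layers are already present locally.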
Jan 28 01:27:02.081391 kubelet[3322]: E0128 01:27:02.081373 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.081492 kubelet[3322]: W0128 01:27:02.081481 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.081568 kubelet[3322]: E0128 01:27:02.081557 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.081797 kubelet[3322]: E0128 01:27:02.081786 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.081866 kubelet[3322]: W0128 01:27:02.081856 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.081928 kubelet[3322]: E0128 01:27:02.081918 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.082189 kubelet[3322]: E0128 01:27:02.082140 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.082268 kubelet[3322]: W0128 01:27:02.082257 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.082318 kubelet[3322]: E0128 01:27:02.082309 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.083808 kubelet[3322]: E0128 01:27:02.082541 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.083910 kubelet[3322]: W0128 01:27:02.083893 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.083971 kubelet[3322]: E0128 01:27:02.083961 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.084287 kubelet[3322]: E0128 01:27:02.084272 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.084372 kubelet[3322]: W0128 01:27:02.084361 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.084428 kubelet[3322]: E0128 01:27:02.084418 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.084638 kubelet[3322]: E0128 01:27:02.084627 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.084703 kubelet[3322]: W0128 01:27:02.084693 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.084759 kubelet[3322]: E0128 01:27:02.084749 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.085021 kubelet[3322]: E0128 01:27:02.084942 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.085021 kubelet[3322]: W0128 01:27:02.084952 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.085021 kubelet[3322]: E0128 01:27:02.084961 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.085343 kubelet[3322]: E0128 01:27:02.085329 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.085503 kubelet[3322]: W0128 01:27:02.085419 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.085503 kubelet[3322]: E0128 01:27:02.085435 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.085717 kubelet[3322]: E0128 01:27:02.085633 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.085717 kubelet[3322]: W0128 01:27:02.085651 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.085717 kubelet[3322]: E0128 01:27:02.085662 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.086101 kubelet[3322]: E0128 01:27:02.085988 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.086101 kubelet[3322]: W0128 01:27:02.086000 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.086101 kubelet[3322]: E0128 01:27:02.086012 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.086364 kubelet[3322]: E0128 01:27:02.086312 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.086364 kubelet[3322]: W0128 01:27:02.086323 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.086364 kubelet[3322]: E0128 01:27:02.086333 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.086760 kubelet[3322]: E0128 01:27:02.086661 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.086760 kubelet[3322]: W0128 01:27:02.086672 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.086760 kubelet[3322]: E0128 01:27:02.086681 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.086973 kubelet[3322]: E0128 01:27:02.086919 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.086973 kubelet[3322]: W0128 01:27:02.086929 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.086973 kubelet[3322]: E0128 01:27:02.086939 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.087397 kubelet[3322]: E0128 01:27:02.087255 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.087397 kubelet[3322]: W0128 01:27:02.087266 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.087397 kubelet[3322]: E0128 01:27:02.087281 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.087651 kubelet[3322]: E0128 01:27:02.087556 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.087651 kubelet[3322]: W0128 01:27:02.087566 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.087651 kubelet[3322]: E0128 01:27:02.087576 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.088094 kubelet[3322]: E0128 01:27:02.087941 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.088094 kubelet[3322]: W0128 01:27:02.087952 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.088094 kubelet[3322]: E0128 01:27:02.087969 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.088981 kubelet[3322]: E0128 01:27:02.088876 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.088981 kubelet[3322]: W0128 01:27:02.088889 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.088981 kubelet[3322]: E0128 01:27:02.088907 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.089322 kubelet[3322]: E0128 01:27:02.089245 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.089322 kubelet[3322]: W0128 01:27:02.089257 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.089322 kubelet[3322]: E0128 01:27:02.089272 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.089842 kubelet[3322]: E0128 01:27:02.089702 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.089842 kubelet[3322]: W0128 01:27:02.089715 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.089842 kubelet[3322]: E0128 01:27:02.089793 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.090183 kubelet[3322]: E0128 01:27:02.090058 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.090183 kubelet[3322]: W0128 01:27:02.090069 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.090183 kubelet[3322]: E0128 01:27:02.090095 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.090523 kubelet[3322]: E0128 01:27:02.090425 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.090523 kubelet[3322]: W0128 01:27:02.090440 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.090523 kubelet[3322]: E0128 01:27:02.090467 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.092232 kubelet[3322]: E0128 01:27:02.092211 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.092913 kubelet[3322]: W0128 01:27:02.092893 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.093005 kubelet[3322]: E0128 01:27:02.092993 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.094400 kubelet[3322]: E0128 01:27:02.094382 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.094495 kubelet[3322]: W0128 01:27:02.094483 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.094572 kubelet[3322]: E0128 01:27:02.094562 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.095657 kubelet[3322]: E0128 01:27:02.095508 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.095657 kubelet[3322]: W0128 01:27:02.095520 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.096324 kubelet[3322]: E0128 01:27:02.096221 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.096324 kubelet[3322]: W0128 01:27:02.096234 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.096324 kubelet[3322]: E0128 01:27:02.096247 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.096614 kubelet[3322]: E0128 01:27:02.096588 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.097329 kubelet[3322]: E0128 01:27:02.097051 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.097670 kubelet[3322]: W0128 01:27:02.097552 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.097670 kubelet[3322]: E0128 01:27:02.097582 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.098299 kubelet[3322]: E0128 01:27:02.098213 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.098299 kubelet[3322]: W0128 01:27:02.098227 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.098614 kubelet[3322]: E0128 01:27:02.098446 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.098996 kubelet[3322]: E0128 01:27:02.098898 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.098996 kubelet[3322]: W0128 01:27:02.098910 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.098996 kubelet[3322]: E0128 01:27:02.098928 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.100962 kubelet[3322]: E0128 01:27:02.100558 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.100962 kubelet[3322]: W0128 01:27:02.100573 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.103207 kubelet[3322]: E0128 01:27:02.103181 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.103503 kubelet[3322]: E0128 01:27:02.103362 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.103503 kubelet[3322]: W0128 01:27:02.103476 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.104372 kubelet[3322]: E0128 01:27:02.103533 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.104449 kubelet[3322]: E0128 01:27:02.104429 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.104489 kubelet[3322]: W0128 01:27:02.104446 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.104489 kubelet[3322]: E0128 01:27:02.104479 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.107238 kubelet[3322]: E0128 01:27:02.107132 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:27:02.107238 kubelet[3322]: W0128 01:27:02.107233 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:27:02.107323 kubelet[3322]: E0128 01:27:02.107248 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:27:02.131503 containerd[1830]: time="2026-01-28T01:27:02.131395303Z" level=info msg="StartContainer for \"7be9e22da9c70c06f837c9e10e4aab239bc85721c6f16f2731b4be3257df4fb3\" returns successfully"
Jan 28 01:27:02.882091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7be9e22da9c70c06f837c9e10e4aab239bc85721c6f16f2731b4be3257df4fb3-rootfs.mount: Deactivated successfully.
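Given how dense the burst above is, a quick way to see its shape is to bucket the driver-call failures per second. A small triage sketch, assuming journal text in this excerpt's format on stdin (the regex is tailored to these lines, not a general journald parser):

#!/usr/bin/env python3
# Count kubelet FlexVolume driver-call failures per second from journal text
# on stdin, e.g.: journalctl -u kubelet | python3 count_flexvol_errors.py
import re
import sys
from collections import Counter

PATTERN = re.compile(
    r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2})\.\d+ kubelet\[\d+\]:"
    r".*FlexVolume: driver call failed"
)

counts = Counter()
for line in sys.stdin:
    m = PATTERN.match(line)
    if m:
        counts[m.group("ts")] += 1

for second, n in sorted(counts.items()):
    print(f"{second}  {n} driver-call failures")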
Jan 28 01:27:03.245112 kubelet[3322]: I0128 01:27:03.071534 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57b4bf9746-mt57n" podStartSLOduration=5.299667197 podStartE2EDuration="11.071515015s" podCreationTimestamp="2026-01-28 01:26:52 +0000 UTC" firstStartedPulling="2026-01-28 01:26:55.104967577 +0000 UTC m=+28.328912937" lastFinishedPulling="2026-01-28 01:27:00.876815435 +0000 UTC m=+34.100760755" observedRunningTime="2026-01-28 01:27:01.069609028 +0000 UTC m=+34.293554428" watchObservedRunningTime="2026-01-28 01:27:03.071515015 +0000 UTC m=+36.295460535"
Jan 28 01:27:03.313203 containerd[1830]: time="2026-01-28T01:27:03.312926704Z" level=info msg="shim disconnected" id=7be9e22da9c70c06f837c9e10e4aab239bc85721c6f16f2731b4be3257df4fb3 namespace=k8s.io
Jan 28 01:27:03.313203 containerd[1830]: time="2026-01-28T01:27:03.312995504Z" level=warning msg="cleaning up after shim disconnected" id=7be9e22da9c70c06f837c9e10e4aab239bc85721c6f16f2731b4be3257df4fb3 namespace=k8s.io
Jan 28 01:27:03.313203 containerd[1830]: time="2026-01-28T01:27:03.313004824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:27:03.896092 kubelet[3322]: E0128 01:27:03.896048 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6"
Jan 28 01:27:04.057389 containerd[1830]: time="2026-01-28T01:27:04.057251828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 28 01:27:05.896435 kubelet[3322]: E0128 01:27:05.896305 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6"
Jan 28 01:27:06.389196 containerd[1830]: time="2026-01-28T01:27:06.388976644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:27:06.391357 containerd[1830]: time="2026-01-28T01:27:06.391169321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 28 01:27:06.394166 containerd[1830]: time="2026-01-28T01:27:06.393556517Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:27:06.397452 containerd[1830]: time="2026-01-28T01:27:06.397406512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:27:06.398121 containerd[1830]: time="2026-01-28T01:27:06.398090711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.340802723s"
Jan 28 01:27:06.398187 containerd[1830]: time="2026-01-28T01:27:06.398122351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
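The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, computed on the monotonic clock (the m=+ offsets). Reproducing the numbers with values copied straight from the entry:

# Figures from the "Observed pod startup duration" entry above.
e2e = 11.071515015                   # podStartE2EDuration: 01:27:03.071515015 - 01:26:52
pull = 34.100760755 - 28.328912937   # lastFinishedPulling - firstStartedPulling (m=+ offsets)
print(f"pull window {pull:.9f}s")    # 5.771847818s
print(f"SLO duration {e2e - pull:.9f}s")  # 5.299667197s, matching podStartSLOduration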
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 28 01:27:06.401036 containerd[1830]: time="2026-01-28T01:27:06.400994547Z" level=info msg="CreateContainer within sandbox \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:27:06.428369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131009413.mount: Deactivated successfully. Jan 28 01:27:06.445056 containerd[1830]: time="2026-01-28T01:27:06.445014564Z" level=info msg="CreateContainer within sandbox \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"09e7e12d1a6ef59ecc7c3d91242ed6b17df73288aa9e3c82504bafb21af7d4af\"" Jan 28 01:27:06.445855 containerd[1830]: time="2026-01-28T01:27:06.445825643Z" level=info msg="StartContainer for \"09e7e12d1a6ef59ecc7c3d91242ed6b17df73288aa9e3c82504bafb21af7d4af\"" Jan 28 01:27:06.501018 containerd[1830]: time="2026-01-28T01:27:06.500975804Z" level=info msg="StartContainer for \"09e7e12d1a6ef59ecc7c3d91242ed6b17df73288aa9e3c82504bafb21af7d4af\" returns successfully" Jan 28 01:27:07.861769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e7e12d1a6ef59ecc7c3d91242ed6b17df73288aa9e3c82504bafb21af7d4af-rootfs.mount: Deactivated successfully. Jan 28 01:27:07.878985 kubelet[3322]: I0128 01:27:07.878095 3322 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:27:07.898009 containerd[1830]: time="2026-01-28T01:27:07.897971256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cvf4,Uid:0a3f2b82-9dfb-45f4-8480-07421e1f39e6,Namespace:calico-system,Attempt:0,}" Jan 28 01:27:08.026366 kubelet[3322]: I0128 01:27:08.025851 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/765006a4-4116-4fc5-bb5f-ca94978ecdd0-goldmane-key-pair\") pod \"goldmane-666569f655-bsx7f\" (UID: \"765006a4-4116-4fc5-bb5f-ca94978ecdd0\") " pod="calico-system/goldmane-666569f655-bsx7f" Jan 28 01:27:08.026366 kubelet[3322]: I0128 01:27:08.025894 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfw2\" (UniqueName: \"kubernetes.io/projected/5ad37459-5747-4fbc-9f3f-b00702efbc4d-kube-api-access-8wfw2\") pod \"whisker-5b9c57688c-5cnwf\" (UID: \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\") " pod="calico-system/whisker-5b9c57688c-5cnwf" Jan 28 01:27:08.028811 kubelet[3322]: I0128 01:27:08.025914 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/765006a4-4116-4fc5-bb5f-ca94978ecdd0-config\") pod \"goldmane-666569f655-bsx7f\" (UID: \"765006a4-4116-4fc5-bb5f-ca94978ecdd0\") " pod="calico-system/goldmane-666569f655-bsx7f" Jan 28 01:27:08.028811 kubelet[3322]: I0128 01:27:08.027317 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhcql\" (UniqueName: \"kubernetes.io/projected/4cbfc031-de2b-4325-b7c3-25699b54c64a-kube-api-access-vhcql\") pod \"coredns-668d6bf9bc-48qp6\" (UID: \"4cbfc031-de2b-4325-b7c3-25699b54c64a\") " pod="kube-system/coredns-668d6bf9bc-48qp6" Jan 28 01:27:08.028811 kubelet[3322]: I0128 01:27:08.027335 3322 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-backend-key-pair\") pod \"whisker-5b9c57688c-5cnwf\" (UID: \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\") " pod="calico-system/whisker-5b9c57688c-5cnwf" Jan 28 01:27:08.028811 kubelet[3322]: I0128 01:27:08.027359 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02952824-c0c6-4ff4-9cb6-94e3d48d9ca2-config-volume\") pod \"coredns-668d6bf9bc-hb25f\" (UID: \"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2\") " pod="kube-system/coredns-668d6bf9bc-hb25f" Jan 28 01:27:08.028811 kubelet[3322]: I0128 01:27:08.027377 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxv5\" (UniqueName: \"kubernetes.io/projected/02952824-c0c6-4ff4-9cb6-94e3d48d9ca2-kube-api-access-rbxv5\") pod \"coredns-668d6bf9bc-hb25f\" (UID: \"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2\") " pod="kube-system/coredns-668d6bf9bc-hb25f" Jan 28 01:27:08.029014 kubelet[3322]: I0128 01:27:08.027394 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/059763c9-3c80-4151-8ab7-5e7bceb1fb9d-calico-apiserver-certs\") pod \"calico-apiserver-c96dcf9cd-km68f\" (UID: \"059763c9-3c80-4151-8ab7-5e7bceb1fb9d\") " pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" Jan 28 01:27:08.029014 kubelet[3322]: I0128 01:27:08.027412 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9wk\" (UniqueName: \"kubernetes.io/projected/059763c9-3c80-4151-8ab7-5e7bceb1fb9d-kube-api-access-nx9wk\") pod \"calico-apiserver-c96dcf9cd-km68f\" (UID: \"059763c9-3c80-4151-8ab7-5e7bceb1fb9d\") " pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" Jan 28 01:27:08.029014 kubelet[3322]: I0128 01:27:08.027427 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mcz2\" (UniqueName: \"kubernetes.io/projected/1e049504-bcc6-4539-aec9-ba0a3a0b4d66-kube-api-access-8mcz2\") pod \"calico-apiserver-c96dcf9cd-wgrwx\" (UID: \"1e049504-bcc6-4539-aec9-ba0a3a0b4d66\") " pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" Jan 28 01:27:08.029014 kubelet[3322]: I0128 01:27:08.027443 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a44d94-a633-4913-abd2-c73f93d95c86-tigera-ca-bundle\") pod \"calico-kube-controllers-79785b9fc9-s4g8w\" (UID: \"c1a44d94-a633-4913-abd2-c73f93d95c86\") " pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" Jan 28 01:27:08.029014 kubelet[3322]: I0128 01:27:08.027461 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfgh\" (UniqueName: \"kubernetes.io/projected/c1a44d94-a633-4913-abd2-c73f93d95c86-kube-api-access-jbfgh\") pod \"calico-kube-controllers-79785b9fc9-s4g8w\" (UID: \"c1a44d94-a633-4913-abd2-c73f93d95c86\") " pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" Jan 28 01:27:08.029130 kubelet[3322]: I0128 01:27:08.027481 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/765006a4-4116-4fc5-bb5f-ca94978ecdd0-goldmane-ca-bundle\") pod \"goldmane-666569f655-bsx7f\" (UID: \"765006a4-4116-4fc5-bb5f-ca94978ecdd0\") " pod="calico-system/goldmane-666569f655-bsx7f" Jan 28 01:27:08.029130 kubelet[3322]: I0128 01:27:08.027495 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlqf\" (UniqueName: \"kubernetes.io/projected/765006a4-4116-4fc5-bb5f-ca94978ecdd0-kube-api-access-6nlqf\") pod \"goldmane-666569f655-bsx7f\" (UID: \"765006a4-4116-4fc5-bb5f-ca94978ecdd0\") " pod="calico-system/goldmane-666569f655-bsx7f" Jan 28 01:27:08.029130 kubelet[3322]: I0128 01:27:08.027512 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e049504-bcc6-4539-aec9-ba0a3a0b4d66-calico-apiserver-certs\") pod \"calico-apiserver-c96dcf9cd-wgrwx\" (UID: \"1e049504-bcc6-4539-aec9-ba0a3a0b4d66\") " pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" Jan 28 01:27:08.029130 kubelet[3322]: I0128 01:27:08.027527 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cbfc031-de2b-4325-b7c3-25699b54c64a-config-volume\") pod \"coredns-668d6bf9bc-48qp6\" (UID: \"4cbfc031-de2b-4325-b7c3-25699b54c64a\") " pod="kube-system/coredns-668d6bf9bc-48qp6" Jan 28 01:27:08.029130 kubelet[3322]: I0128 01:27:08.027542 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-ca-bundle\") pod \"whisker-5b9c57688c-5cnwf\" (UID: \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\") " pod="calico-system/whisker-5b9c57688c-5cnwf" Jan 28 01:27:08.699243 kubelet[3322]: I0128 01:27:08.579341 3322 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 01:27:08.774697 containerd[1830]: time="2026-01-28T01:27:08.774623529Z" level=info msg="shim disconnected" id=09e7e12d1a6ef59ecc7c3d91242ed6b17df73288aa9e3c82504bafb21af7d4af namespace=k8s.io Jan 28 01:27:08.774697 containerd[1830]: time="2026-01-28T01:27:08.774694489Z" level=warning msg="cleaning up after shim disconnected" id=09e7e12d1a6ef59ecc7c3d91242ed6b17df73288aa9e3c82504bafb21af7d4af namespace=k8s.io Jan 28 01:27:08.774697 containerd[1830]: time="2026-01-28T01:27:08.774704689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:27:08.834349 containerd[1830]: time="2026-01-28T01:27:08.834287444Z" level=error msg="Failed to destroy network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:08.834814 containerd[1830]: time="2026-01-28T01:27:08.834718323Z" level=error msg="encountered an error cleaning up failed sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:08.834814 containerd[1830]: time="2026-01-28T01:27:08.834791843Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7cvf4,Uid:0a3f2b82-9dfb-45f4-8480-07421e1f39e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:08.835184 kubelet[3322]: E0128 01:27:08.835029 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:08.835184 kubelet[3322]: E0128 01:27:08.835124 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:27:08.835687 kubelet[3322]: E0128 01:27:08.835298 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cvf4" Jan 28 01:27:08.835687 kubelet[3322]: E0128 01:27:08.835367 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:27:08.846363 containerd[1830]: time="2026-01-28T01:27:08.846295387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hb25f,Uid:02952824-c0c6-4ff4-9cb6-94e3d48d9ca2,Namespace:kube-system,Attempt:0,}" Jan 28 01:27:08.846518 containerd[1830]: time="2026-01-28T01:27:08.846294787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79785b9fc9-s4g8w,Uid:c1a44d94-a633-4913-abd2-c73f93d95c86,Namespace:calico-system,Attempt:0,}" Jan 28 01:27:08.855574 containerd[1830]: time="2026-01-28T01:27:08.855532134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-48qp6,Uid:4cbfc031-de2b-4325-b7c3-25699b54c64a,Namespace:kube-system,Attempt:0,}" Jan 28 01:27:08.864424 containerd[1830]: time="2026-01-28T01:27:08.864336921Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5b9c57688c-5cnwf,Uid:5ad37459-5747-4fbc-9f3f-b00702efbc4d,Namespace:calico-system,Attempt:0,}" Jan 28 01:27:08.864424 containerd[1830]: time="2026-01-28T01:27:08.864550401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-km68f,Uid:059763c9-3c80-4151-8ab7-5e7bceb1fb9d,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:27:08.869421 containerd[1830]: time="2026-01-28T01:27:08.869289954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-wgrwx,Uid:1e049504-bcc6-4539-aec9-ba0a3a0b4d66,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:27:08.869495 containerd[1830]: time="2026-01-28T01:27:08.869336754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bsx7f,Uid:765006a4-4116-4fc5-bb5f-ca94978ecdd0,Namespace:calico-system,Attempt:0,}" Jan 28 01:27:09.005108 containerd[1830]: time="2026-01-28T01:27:09.004435322Z" level=error msg="Failed to destroy network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.005981 containerd[1830]: time="2026-01-28T01:27:09.005343641Z" level=error msg="encountered an error cleaning up failed sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.005981 containerd[1830]: time="2026-01-28T01:27:09.005394441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hb25f,Uid:02952824-c0c6-4ff4-9cb6-94e3d48d9ca2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.006529 kubelet[3322]: E0128 01:27:09.006184 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.006529 kubelet[3322]: E0128 01:27:09.006251 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hb25f" Jan 28 01:27:09.006529 kubelet[3322]: E0128 01:27:09.006271 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hb25f" Jan 28 01:27:09.007102 kubelet[3322]: E0128 01:27:09.006312 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hb25f_kube-system(02952824-c0c6-4ff4-9cb6-94e3d48d9ca2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hb25f_kube-system(02952824-c0c6-4ff4-9cb6-94e3d48d9ca2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hb25f" podUID="02952824-c0c6-4ff4-9cb6-94e3d48d9ca2" Jan 28 01:27:09.067012 kubelet[3322]: I0128 01:27:09.066375 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:09.071655 kubelet[3322]: I0128 01:27:09.071627 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:09.073019 containerd[1830]: time="2026-01-28T01:27:09.072978904Z" level=error msg="Failed to destroy network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.074412 containerd[1830]: time="2026-01-28T01:27:09.073767303Z" level=info msg="StopPodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\"" Jan 28 01:27:09.074714 containerd[1830]: time="2026-01-28T01:27:09.074688862Z" level=info msg="StopPodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\"" Jan 28 01:27:09.074860 containerd[1830]: time="2026-01-28T01:27:09.074830542Z" level=info msg="Ensure that sandbox f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756 in task-service has been cleanup successfully" Jan 28 01:27:09.075233 containerd[1830]: time="2026-01-28T01:27:09.075190141Z" level=info msg="Ensure that sandbox fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05 in task-service has been cleanup successfully" Jan 28 01:27:09.078089 containerd[1830]: time="2026-01-28T01:27:09.077891017Z" level=error msg="encountered an error cleaning up failed sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.078089 containerd[1830]: time="2026-01-28T01:27:09.077946977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79785b9fc9-s4g8w,Uid:c1a44d94-a633-4913-abd2-c73f93d95c86,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 01:27:09.078236 kubelet[3322]: E0128 01:27:09.078114 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.078236 kubelet[3322]: E0128 01:27:09.078197 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" Jan 28 01:27:09.078236 kubelet[3322]: E0128 01:27:09.078229 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" Jan 28 01:27:09.078320 kubelet[3322]: E0128 01:27:09.078265 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79785b9fc9-s4g8w_calico-system(c1a44d94-a633-4913-abd2-c73f93d95c86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79785b9fc9-s4g8w_calico-system(c1a44d94-a633-4913-abd2-c73f93d95c86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:27:09.090167 containerd[1830]: time="2026-01-28T01:27:09.089690801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:27:09.135571 containerd[1830]: time="2026-01-28T01:27:09.132138740Z" level=error msg="Failed to destroy network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.136445 containerd[1830]: time="2026-01-28T01:27:09.136405414Z" level=error msg="encountered an error cleaning up failed sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.137010 containerd[1830]: time="2026-01-28T01:27:09.136640494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-48qp6,Uid:4cbfc031-de2b-4325-b7c3-25699b54c64a,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.138322 kubelet[3322]: E0128 01:27:09.137108 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.138322 kubelet[3322]: E0128 01:27:09.137176 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-48qp6" Jan 28 01:27:09.138322 kubelet[3322]: E0128 01:27:09.137196 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-48qp6" Jan 28 01:27:09.138447 kubelet[3322]: E0128 01:27:09.137239 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-48qp6_kube-system(4cbfc031-de2b-4325-b7c3-25699b54c64a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-48qp6_kube-system(4cbfc031-de2b-4325-b7c3-25699b54c64a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-48qp6" podUID="4cbfc031-de2b-4325-b7c3-25699b54c64a" Jan 28 01:27:09.183269 containerd[1830]: time="2026-01-28T01:27:09.183199988Z" level=error msg="Failed to destroy network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.183682 containerd[1830]: time="2026-01-28T01:27:09.183587427Z" level=error msg="Failed to destroy network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.184248 containerd[1830]: time="2026-01-28T01:27:09.183928227Z" level=error msg="encountered an error cleaning up failed sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.184248 containerd[1830]: time="2026-01-28T01:27:09.183976386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-km68f,Uid:059763c9-3c80-4151-8ab7-5e7bceb1fb9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.184467 containerd[1830]: time="2026-01-28T01:27:09.184440466Z" level=error msg="encountered an error cleaning up failed sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.184570 containerd[1830]: time="2026-01-28T01:27:09.184546906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bsx7f,Uid:765006a4-4116-4fc5-bb5f-ca94978ecdd0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.185292 kubelet[3322]: E0128 01:27:09.184820 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.185292 kubelet[3322]: E0128 01:27:09.184883 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bsx7f" Jan 28 01:27:09.185292 kubelet[3322]: E0128 01:27:09.184913 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bsx7f" Jan 28 01:27:09.185428 kubelet[3322]: E0128 01:27:09.184949 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bsx7f_calico-system(765006a4-4116-4fc5-bb5f-ca94978ecdd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-bsx7f_calico-system(765006a4-4116-4fc5-bb5f-ca94978ecdd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:27:09.185428 kubelet[3322]: E0128 01:27:09.184820 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.185428 kubelet[3322]: E0128 01:27:09.185187 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" Jan 28 01:27:09.185567 kubelet[3322]: E0128 01:27:09.185202 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" Jan 28 01:27:09.185567 kubelet[3322]: E0128 01:27:09.185225 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c96dcf9cd-km68f_calico-apiserver(059763c9-3c80-4151-8ab7-5e7bceb1fb9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c96dcf9cd-km68f_calico-apiserver(059763c9-3c80-4151-8ab7-5e7bceb1fb9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:27:09.191756 containerd[1830]: time="2026-01-28T01:27:09.191715695Z" level=error msg="StopPodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" failed" error="failed to destroy network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.192033 containerd[1830]: time="2026-01-28T01:27:09.191904015Z" level=error msg="StopPodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" failed" error="failed to destroy network for sandbox 
\"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.192394 kubelet[3322]: E0128 01:27:09.192358 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:09.192487 kubelet[3322]: E0128 01:27:09.192443 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756"} Jan 28 01:27:09.192530 kubelet[3322]: E0128 01:27:09.192493 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:09.192530 kubelet[3322]: E0128 01:27:09.192518 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hb25f" podUID="02952824-c0c6-4ff4-9cb6-94e3d48d9ca2" Jan 28 01:27:09.192650 kubelet[3322]: E0128 01:27:09.192358 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:09.192650 kubelet[3322]: E0128 01:27:09.192542 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05"} Jan 28 01:27:09.192650 kubelet[3322]: E0128 01:27:09.192559 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:09.192650 
kubelet[3322]: E0128 01:27:09.192574 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a3f2b82-9dfb-45f4-8480-07421e1f39e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:27:09.194010 containerd[1830]: time="2026-01-28T01:27:09.193972132Z" level=error msg="Failed to destroy network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.195415 containerd[1830]: time="2026-01-28T01:27:09.195377850Z" level=error msg="encountered an error cleaning up failed sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.195595 containerd[1830]: time="2026-01-28T01:27:09.195429370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b9c57688c-5cnwf,Uid:5ad37459-5747-4fbc-9f3f-b00702efbc4d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.195822 kubelet[3322]: E0128 01:27:09.195686 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.195822 kubelet[3322]: E0128 01:27:09.195733 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b9c57688c-5cnwf" Jan 28 01:27:09.195822 kubelet[3322]: E0128 01:27:09.195765 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b9c57688c-5cnwf" Jan 28 01:27:09.195926 kubelet[3322]: E0128 01:27:09.195796 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"whisker-5b9c57688c-5cnwf_calico-system(5ad37459-5747-4fbc-9f3f-b00702efbc4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b9c57688c-5cnwf_calico-system(5ad37459-5747-4fbc-9f3f-b00702efbc4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b9c57688c-5cnwf" podUID="5ad37459-5747-4fbc-9f3f-b00702efbc4d" Jan 28 01:27:09.201631 containerd[1830]: time="2026-01-28T01:27:09.201590761Z" level=error msg="Failed to destroy network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.201891 containerd[1830]: time="2026-01-28T01:27:09.201865881Z" level=error msg="encountered an error cleaning up failed sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.201940 containerd[1830]: time="2026-01-28T01:27:09.201908361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-wgrwx,Uid:1e049504-bcc6-4539-aec9-ba0a3a0b4d66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.202125 kubelet[3322]: E0128 01:27:09.202093 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:09.202223 kubelet[3322]: E0128 01:27:09.202142 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" Jan 28 01:27:09.202223 kubelet[3322]: E0128 01:27:09.202182 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" Jan 28 01:27:09.202311 kubelet[3322]: 
E0128 01:27:09.202219 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c96dcf9cd-wgrwx_calico-apiserver(1e049504-bcc6-4539-aec9-ba0a3a0b4d66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c96dcf9cd-wgrwx_calico-apiserver(1e049504-bcc6-4539-aec9-ba0a3a0b4d66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:09.859301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844-shm.mount: Deactivated successfully. Jan 28 01:27:09.859466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f-shm.mount: Deactivated successfully. Jan 28 01:27:09.859548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756-shm.mount: Deactivated successfully. Jan 28 01:27:10.088726 kubelet[3322]: I0128 01:27:10.087908 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:27:10.089060 containerd[1830]: time="2026-01-28T01:27:10.088584739Z" level=info msg="StopPodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\"" Jan 28 01:27:10.090875 containerd[1830]: time="2026-01-28T01:27:10.089444858Z" level=info msg="Ensure that sandbox 12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22 in task-service has been cleanup successfully" Jan 28 01:27:10.090875 containerd[1830]: time="2026-01-28T01:27:10.090202697Z" level=info msg="StopPodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\"" Jan 28 01:27:10.090875 containerd[1830]: time="2026-01-28T01:27:10.090381097Z" level=info msg="Ensure that sandbox 0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844 in task-service has been cleanup successfully" Jan 28 01:27:10.090947 kubelet[3322]: I0128 01:27:10.089778 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:10.093092 kubelet[3322]: I0128 01:27:10.093072 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:27:10.093718 containerd[1830]: time="2026-01-28T01:27:10.093695012Z" level=info msg="StopPodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\"" Jan 28 01:27:10.094118 containerd[1830]: time="2026-01-28T01:27:10.094006691Z" level=info msg="Ensure that sandbox e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e in task-service has been cleanup successfully" Jan 28 01:27:10.096670 kubelet[3322]: I0128 01:27:10.096647 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:10.097078 containerd[1830]: time="2026-01-28T01:27:10.097054927Z" level=info 
msg="StopPodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\"" Jan 28 01:27:10.097295 containerd[1830]: time="2026-01-28T01:27:10.097268807Z" level=info msg="Ensure that sandbox fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59 in task-service has been cleanup successfully" Jan 28 01:27:10.102317 kubelet[3322]: I0128 01:27:10.102237 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:27:10.104598 containerd[1830]: time="2026-01-28T01:27:10.102705319Z" level=info msg="StopPodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\"" Jan 28 01:27:10.104835 containerd[1830]: time="2026-01-28T01:27:10.104728156Z" level=info msg="Ensure that sandbox f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f in task-service has been cleanup successfully" Jan 28 01:27:10.112014 kubelet[3322]: I0128 01:27:10.111927 3322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:10.114100 containerd[1830]: time="2026-01-28T01:27:10.113966463Z" level=info msg="StopPodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\"" Jan 28 01:27:10.114982 containerd[1830]: time="2026-01-28T01:27:10.114726342Z" level=info msg="Ensure that sandbox 815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1 in task-service has been cleanup successfully" Jan 28 01:27:10.166636 containerd[1830]: time="2026-01-28T01:27:10.166553988Z" level=error msg="StopPodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" failed" error="failed to destroy network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:10.166933 kubelet[3322]: E0128 01:27:10.166794 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:27:10.166933 kubelet[3322]: E0128 01:27:10.166840 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e"} Jan 28 01:27:10.166933 kubelet[3322]: E0128 01:27:10.166877 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"765006a4-4116-4fc5-bb5f-ca94978ecdd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:10.166933 kubelet[3322]: E0128 01:27:10.166897 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"765006a4-4116-4fc5-bb5f-ca94978ecdd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:27:10.200061 containerd[1830]: time="2026-01-28T01:27:10.199688261Z" level=error msg="StopPodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" failed" error="failed to destroy network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:10.200193 kubelet[3322]: E0128 01:27:10.199924 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:27:10.200193 kubelet[3322]: E0128 01:27:10.199972 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22"} Jan 28 01:27:10.200193 kubelet[3322]: E0128 01:27:10.200003 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"059763c9-3c80-4151-8ab7-5e7bceb1fb9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:10.200193 kubelet[3322]: E0128 01:27:10.200025 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"059763c9-3c80-4151-8ab7-5e7bceb1fb9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:27:10.204175 containerd[1830]: time="2026-01-28T01:27:10.203813535Z" level=error msg="StopPodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" failed" error="failed to destroy network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:10.204471 kubelet[3322]: E0128 01:27:10.204432 3322 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:10.204529 kubelet[3322]: E0128 01:27:10.204479 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844"} Jan 28 01:27:10.204529 kubelet[3322]: E0128 01:27:10.204517 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cbfc031-de2b-4325-b7c3-25699b54c64a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:10.204606 kubelet[3322]: E0128 01:27:10.204535 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cbfc031-de2b-4325-b7c3-25699b54c64a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-48qp6" podUID="4cbfc031-de2b-4325-b7c3-25699b54c64a" Jan 28 01:27:10.208288 containerd[1830]: time="2026-01-28T01:27:10.208211849Z" level=error msg="StopPodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" failed" error="failed to destroy network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:10.208445 kubelet[3322]: E0128 01:27:10.208411 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:10.208483 kubelet[3322]: E0128 01:27:10.208456 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1"} Jan 28 01:27:10.208507 kubelet[3322]: E0128 01:27:10.208481 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e049504-bcc6-4539-aec9-ba0a3a0b4d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:10.208507 kubelet[3322]: E0128 01:27:10.208498 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e049504-bcc6-4539-aec9-ba0a3a0b4d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:10.211309 containerd[1830]: time="2026-01-28T01:27:10.211022885Z" level=error msg="StopPodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" failed" error="failed to destroy network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:10.211378 kubelet[3322]: E0128 01:27:10.211199 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:10.211378 kubelet[3322]: E0128 01:27:10.211234 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59"} Jan 28 01:27:10.211378 kubelet[3322]: E0128 01:27:10.211265 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:10.211378 kubelet[3322]: E0128 01:27:10.211286 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b9c57688c-5cnwf" podUID="5ad37459-5747-4fbc-9f3f-b00702efbc4d" Jan 28 01:27:10.213047 containerd[1830]: time="2026-01-28T01:27:10.212949962Z" level=error msg="StopPodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" failed" error="failed to destroy network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:27:10.213205 kubelet[3322]: E0128 01:27:10.213122 3322 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:27:10.213251 kubelet[3322]: E0128 01:27:10.213210 3322 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f"} Jan 28 01:27:10.213251 kubelet[3322]: E0128 01:27:10.213236 3322 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1a44d94-a633-4913-abd2-c73f93d95c86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:27:10.213324 kubelet[3322]: E0128 01:27:10.213253 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1a44d94-a633-4913-abd2-c73f93d95c86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:27:13.330717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568054304.mount: Deactivated successfully. 
Jan 28 01:27:13.571793 containerd[1830]: time="2026-01-28T01:27:13.571739902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:13.573771 containerd[1830]: time="2026-01-28T01:27:13.573742139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 28 01:27:13.577064 containerd[1830]: time="2026-01-28T01:27:13.577036054Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:13.586913 containerd[1830]: time="2026-01-28T01:27:13.586858600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:27:13.587877 containerd[1830]: time="2026-01-28T01:27:13.587463559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.497466119s" Jan 28 01:27:13.587877 containerd[1830]: time="2026-01-28T01:27:13.587495559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 28 01:27:13.602417 containerd[1830]: time="2026-01-28T01:27:13.602381538Z" level=info msg="CreateContainer within sandbox \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:27:13.670417 containerd[1830]: time="2026-01-28T01:27:13.670372641Z" level=info msg="CreateContainer within sandbox \"a26602e20a5940788824b27212ca169c15d893feb1eacb8289e83528f875cd5c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c63d9f7e1d7c58566b150483d7b3032abfd19bf1015dfa42ef8820924326c58d\"" Jan 28 01:27:13.673711 containerd[1830]: time="2026-01-28T01:27:13.672172798Z" level=info msg="StartContainer for \"c63d9f7e1d7c58566b150483d7b3032abfd19bf1015dfa42ef8820924326c58d\"" Jan 28 01:27:13.727118 containerd[1830]: time="2026-01-28T01:27:13.727078440Z" level=info msg="StartContainer for \"c63d9f7e1d7c58566b150483d7b3032abfd19bf1015dfa42ef8820924326c58d\" returns successfully" Jan 28 01:27:14.075754 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:27:14.075878 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 28 01:27:14.156903 kubelet[3322]: I0128 01:27:14.156841 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ds9m9" podStartSLOduration=4.077591917 podStartE2EDuration="22.156827068s" podCreationTimestamp="2026-01-28 01:26:52 +0000 UTC" firstStartedPulling="2026-01-28 01:26:55.509079567 +0000 UTC m=+28.733024927" lastFinishedPulling="2026-01-28 01:27:13.588314718 +0000 UTC m=+46.812260078" observedRunningTime="2026-01-28 01:27:14.15547947 +0000 UTC m=+47.379424870" watchObservedRunningTime="2026-01-28 01:27:14.156827068 +0000 UTC m=+47.380772428" Jan 28 01:27:14.215577 containerd[1830]: time="2026-01-28T01:27:14.214843106Z" level=info msg="StopPodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\"" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.291 [INFO][4497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.291 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" iface="eth0" netns="/var/run/netns/cni-7e922a7f-df21-708a-c60c-565424e15e47" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.292 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" iface="eth0" netns="/var/run/netns/cni-7e922a7f-df21-708a-c60c-565424e15e47" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.292 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" iface="eth0" netns="/var/run/netns/cni-7e922a7f-df21-708a-c60c-565424e15e47" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.292 [INFO][4497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.292 [INFO][4497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.316 [INFO][4509] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.316 [INFO][4509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.317 [INFO][4509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.332 [WARNING][4509] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.334 [INFO][4509] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.336 [INFO][4509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:14.346762 containerd[1830]: 2026-01-28 01:27:14.343 [INFO][4497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:14.350726 containerd[1830]: time="2026-01-28T01:27:14.348877155Z" level=info msg="TearDown network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" successfully" Jan 28 01:27:14.350726 containerd[1830]: time="2026-01-28T01:27:14.348907155Z" level=info msg="StopPodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" returns successfully" Jan 28 01:27:14.354443 systemd[1]: run-netns-cni\x2d7e922a7f\x2ddf21\x2d708a\x2dc60c\x2d565424e15e47.mount: Deactivated successfully. Jan 28 01:27:14.471043 kubelet[3322]: I0128 01:27:14.470021 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-ca-bundle\") pod \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\" (UID: \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\") " Jan 28 01:27:14.471043 kubelet[3322]: I0128 01:27:14.470079 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-backend-key-pair\") pod \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\" (UID: \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\") " Jan 28 01:27:14.471043 kubelet[3322]: I0128 01:27:14.470103 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wfw2\" (UniqueName: \"kubernetes.io/projected/5ad37459-5747-4fbc-9f3f-b00702efbc4d-kube-api-access-8wfw2\") pod \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\" (UID: \"5ad37459-5747-4fbc-9f3f-b00702efbc4d\") " Jan 28 01:27:14.471679 kubelet[3322]: I0128 01:27:14.471654 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5ad37459-5747-4fbc-9f3f-b00702efbc4d" (UID: "5ad37459-5747-4fbc-9f3f-b00702efbc4d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:27:14.486133 systemd[1]: var-lib-kubelet-pods-5ad37459\x2d5747\x2d4fbc\x2d9f3f\x2db00702efbc4d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 28 01:27:14.487037 kubelet[3322]: I0128 01:27:14.486903 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5ad37459-5747-4fbc-9f3f-b00702efbc4d" (UID: "5ad37459-5747-4fbc-9f3f-b00702efbc4d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:27:14.487216 kubelet[3322]: I0128 01:27:14.487197 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad37459-5747-4fbc-9f3f-b00702efbc4d-kube-api-access-8wfw2" (OuterVolumeSpecName: "kube-api-access-8wfw2") pod "5ad37459-5747-4fbc-9f3f-b00702efbc4d" (UID: "5ad37459-5747-4fbc-9f3f-b00702efbc4d"). InnerVolumeSpecName "kube-api-access-8wfw2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:27:14.487461 systemd[1]: var-lib-kubelet-pods-5ad37459\x2d5747\x2d4fbc\x2d9f3f\x2db00702efbc4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8wfw2.mount: Deactivated successfully. Jan 28 01:27:14.570794 kubelet[3322]: I0128 01:27:14.570753 3322 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-ca-bundle\") on node \"ci-4081.3.6-n-11aaf12d54\" DevicePath \"\"" Jan 28 01:27:14.570794 kubelet[3322]: I0128 01:27:14.570791 3322 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ad37459-5747-4fbc-9f3f-b00702efbc4d-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-11aaf12d54\" DevicePath \"\"" Jan 28 01:27:14.570794 kubelet[3322]: I0128 01:27:14.570803 3322 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wfw2\" (UniqueName: \"kubernetes.io/projected/5ad37459-5747-4fbc-9f3f-b00702efbc4d-kube-api-access-8wfw2\") on node \"ci-4081.3.6-n-11aaf12d54\" DevicePath \"\"" Jan 28 01:27:15.276110 kubelet[3322]: I0128 01:27:15.276076 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc274\" (UniqueName: \"kubernetes.io/projected/1c968694-35fa-404a-bc2c-7a251b92bedd-kube-api-access-sc274\") pod \"whisker-54666d9c5f-8kw6q\" (UID: \"1c968694-35fa-404a-bc2c-7a251b92bedd\") " pod="calico-system/whisker-54666d9c5f-8kw6q" Jan 28 01:27:15.276110 kubelet[3322]: I0128 01:27:15.276119 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c968694-35fa-404a-bc2c-7a251b92bedd-whisker-backend-key-pair\") pod \"whisker-54666d9c5f-8kw6q\" (UID: \"1c968694-35fa-404a-bc2c-7a251b92bedd\") " pod="calico-system/whisker-54666d9c5f-8kw6q" Jan 28 01:27:15.276110 kubelet[3322]: I0128 01:27:15.276138 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c968694-35fa-404a-bc2c-7a251b92bedd-whisker-ca-bundle\") pod \"whisker-54666d9c5f-8kw6q\" (UID: \"1c968694-35fa-404a-bc2c-7a251b92bedd\") " pod="calico-system/whisker-54666d9c5f-8kw6q" Jan 28 01:27:15.548646 containerd[1830]: time="2026-01-28T01:27:15.548546406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54666d9c5f-8kw6q,Uid:1c968694-35fa-404a-bc2c-7a251b92bedd,Namespace:calico-system,Attempt:0,}" Jan 28 01:27:15.824126 systemd-networkd[1398]: 
calib4e3de92e1e: Link UP Jan 28 01:27:15.824354 systemd-networkd[1398]: calib4e3de92e1e: Gained carrier Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.630 [INFO][4602] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.655 [INFO][4602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0 whisker-54666d9c5f- calico-system 1c968694-35fa-404a-bc2c-7a251b92bedd 935 0 2026-01-28 01:27:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54666d9c5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 whisker-54666d9c5f-8kw6q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib4e3de92e1e [] [] }} ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.655 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.728 [INFO][4651] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" HandleID="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.730 [INFO][4651] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" HandleID="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"whisker-54666d9c5f-8kw6q", "timestamp":"2026-01-28 01:27:15.72835959 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.730 [INFO][4651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.730 [INFO][4651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.730 [INFO][4651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.743 [INFO][4651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.752 [INFO][4651] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.758 [INFO][4651] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.761 [INFO][4651] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.763 [INFO][4651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.763 [INFO][4651] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.764 [INFO][4651] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85 Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.770 [INFO][4651] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.784 [INFO][4651] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.129/26] block=192.168.96.128/26 handle="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.784 [INFO][4651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.129/26] handle="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.785 [INFO][4651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
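
For context on the IPAM trace above: the node holds an affinity for the block 192.168.96.128/26, i.e. the 64 addresses .128 through .191, and this first workload (whisker-54666d9c5f-8kw6q) is handed 192.168.96.129. A small check with net/netip (which addresses get claimed in what order is Calico-internal; the log simply shows .129 claimed first):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.96.128/26")
    	size := 1 << (32 - block.Bits()) // 2^6 = 64 addresses: .128 through .191
    	claimed := netip.MustParseAddr("192.168.96.129")
    	fmt.Printf("block %s holds %d addrs; contains %s: %v\n",
    		block, size, claimed, block.Contains(claimed))
    }
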
Jan 28 01:27:15.851333 containerd[1830]: 2026-01-28 01:27:15.785 [INFO][4651] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.129/26] IPv6=[] ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" HandleID="k8s-pod-network.b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.851924 containerd[1830]: 2026-01-28 01:27:15.789 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0", GenerateName:"whisker-54666d9c5f-", Namespace:"calico-system", SelfLink:"", UID:"1c968694-35fa-404a-bc2c-7a251b92bedd", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 27, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54666d9c5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"whisker-54666d9c5f-8kw6q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib4e3de92e1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:15.851924 containerd[1830]: 2026-01-28 01:27:15.794 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.129/32] ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.851924 containerd[1830]: 2026-01-28 01:27:15.794 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4e3de92e1e ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.851924 containerd[1830]: 2026-01-28 01:27:15.821 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.851924 containerd[1830]: 2026-01-28 01:27:15.826 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" 
Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0", GenerateName:"whisker-54666d9c5f-", Namespace:"calico-system", SelfLink:"", UID:"1c968694-35fa-404a-bc2c-7a251b92bedd", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 27, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54666d9c5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85", Pod:"whisker-54666d9c5f-8kw6q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib4e3de92e1e", MAC:"6a:4c:ad:33:6f:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:15.851924 containerd[1830]: 2026-01-28 01:27:15.845 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85" Namespace="calico-system" Pod="whisker-54666d9c5f-8kw6q" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--54666d9c5f--8kw6q-eth0" Jan 28 01:27:15.922755 containerd[1830]: time="2026-01-28T01:27:15.921976834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:15.923320 containerd[1830]: time="2026-01-28T01:27:15.923269952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:15.923320 containerd[1830]: time="2026-01-28T01:27:15.923294432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:15.923581 containerd[1830]: time="2026-01-28T01:27:15.923500272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:16.008225 kernel: bpftool[4737]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 01:27:16.011255 containerd[1830]: time="2026-01-28T01:27:16.010526988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54666d9c5f-8kw6q,Uid:1c968694-35fa-404a-bc2c-7a251b92bedd,Namespace:calico-system,Attempt:0,} returns sandbox id \"b48c99bf03704fe83a73228f1539ff1f51edf49916e64f14ae2bbfc0cb9ffe85\"" Jan 28 01:27:16.015668 containerd[1830]: time="2026-01-28T01:27:16.015382981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:27:16.244758 systemd-networkd[1398]: vxlan.calico: Link UP Jan 28 01:27:16.244768 systemd-networkd[1398]: vxlan.calico: Gained carrier Jan 28 01:27:16.313266 containerd[1830]: time="2026-01-28T01:27:16.313221237Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:16.315857 containerd[1830]: time="2026-01-28T01:27:16.315821273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:27:16.315938 containerd[1830]: time="2026-01-28T01:27:16.315920793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:27:16.316130 kubelet[3322]: E0128 01:27:16.316092 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:16.316442 kubelet[3322]: E0128 01:27:16.316151 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:16.323946 kubelet[3322]: E0128 01:27:16.323890 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a9c0213a57f46a6a663e5576055455b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:16.326790 containerd[1830]: time="2026-01-28T01:27:16.326758697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:27:16.597176 containerd[1830]: time="2026-01-28T01:27:16.597105592Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:16.600311 containerd[1830]: time="2026-01-28T01:27:16.600264628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:27:16.600399 containerd[1830]: time="2026-01-28T01:27:16.600381348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:16.600781 kubelet[3322]: E0128 01:27:16.600537 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:16.600781 kubelet[3322]: E0128 01:27:16.600585 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:16.600877 kubelet[3322]: E0128 01:27:16.600689 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:16.602244 kubelet[3322]: E0128 01:27:16.601835 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:27:16.897808 kubelet[3322]: I0128 01:27:16.897587 3322 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad37459-5747-4fbc-9f3f-b00702efbc4d" path="/var/lib/kubelet/pods/5ad37459-5747-4fbc-9f3f-b00702efbc4d/volumes" Jan 28 
01:27:16.900285 systemd-networkd[1398]: calib4e3de92e1e: Gained IPv6LL Jan 28 01:27:17.141369 kubelet[3322]: E0128 01:27:17.141326 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:27:18.116276 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Jan 28 01:27:19.898260 containerd[1830]: time="2026-01-28T01:27:19.897893331Z" level=info msg="StopPodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\"" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.943 [INFO][4828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.943 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" iface="eth0" netns="/var/run/netns/cni-f3492b1d-cead-127b-61b7-916b4f90277d" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.944 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" iface="eth0" netns="/var/run/netns/cni-f3492b1d-cead-127b-61b7-916b4f90277d" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.944 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" iface="eth0" netns="/var/run/netns/cni-f3492b1d-cead-127b-61b7-916b4f90277d" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.944 [INFO][4828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.944 [INFO][4828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.966 [INFO][4835] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.966 [INFO][4835] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.966 [INFO][4835] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.974 [WARNING][4835] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.975 [INFO][4835] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.976 [INFO][4835] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:19.980686 containerd[1830]: 2026-01-28 01:27:19.977 [INFO][4828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:19.981547 containerd[1830]: time="2026-01-28T01:27:19.981088732Z" level=info msg="TearDown network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" successfully" Jan 28 01:27:19.981547 containerd[1830]: time="2026-01-28T01:27:19.981120772Z" level=info msg="StopPodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" returns successfully" Jan 28 01:27:19.981759 containerd[1830]: time="2026-01-28T01:27:19.981730491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hb25f,Uid:02952824-c0c6-4ff4-9cb6-94e3d48d9ca2,Namespace:kube-system,Attempt:1,}" Jan 28 01:27:19.982781 systemd[1]: run-netns-cni\x2df3492b1d\x2dcead\x2d127b\x2d61b7\x2d916b4f90277d.mount: Deactivated successfully. 
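
Back on the whisker pod: the ErrImagePull at 01:27:16 becomes ImagePullBackOff on the next sync (01:27:17.141) because the kubelet retries failed pulls with exponential back-off. Assuming the usual kubelet defaults of a 10s initial delay, doubling per failure, capped at 5 minutes (an assumption; this log does not print the schedule), the retry ladder looks like:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay, limit := 10*time.Second, 5*time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("retry %d after %v\n", attempt, delay)
    		delay *= 2
    		if delay > limit {
    			delay = limit
    		}
    	}
    	// 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s — the cap is why a missing
    	// tag keeps producing back-off errors at a steady cadence.
    }
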
Jan 28 01:27:20.109193 systemd-networkd[1398]: cali0fcefe358de: Link UP Jan 28 01:27:20.109446 systemd-networkd[1398]: cali0fcefe358de: Gained carrier Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.040 [INFO][4841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0 coredns-668d6bf9bc- kube-system 02952824-c0c6-4ff4-9cb6-94e3d48d9ca2 970 0 2026-01-28 01:26:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 coredns-668d6bf9bc-hb25f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0fcefe358de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.040 [INFO][4841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.063 [INFO][4853] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" HandleID="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.064 [INFO][4853] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" HandleID="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"coredns-668d6bf9bc-hb25f", "timestamp":"2026-01-28 01:27:20.063968334 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.064 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.064 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.064 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.074 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.077 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.080 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.082 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.086 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.086 [INFO][4853] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.087 [INFO][4853] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501 Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.092 [INFO][4853] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.102 [INFO][4853] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.130/26] block=192.168.96.128/26 handle="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.102 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.130/26] handle="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.102 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
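
The same acquire-lock / load-block / claim / release sequence repeats for every pod scheduled on this node, handing out consecutive addresses from the block: .129 (whisker), .130 (coredns, above), .131 (calico-apiserver, below). A toy serialization of that pattern (illustrative only; real Calico IPAM persists block state in the datastore rather than in memory):

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // blockAllocator is a toy stand-in for Calico's per-node IPAM block; the
    // mutex plays the role of the "host-wide IPAM lock" in the trace above.
    type blockAllocator struct {
    	mu   sync.Mutex
    	next netip.Addr
    }

    func (a *blockAllocator) assign() netip.Addr {
    	a.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer a.mu.Unlock() // "Released host-wide IPAM lock."
    	ip := a.next
    	a.next = a.next.Next()
    	return ip
    }

    func main() {
    	a := &blockAllocator{next: netip.MustParseAddr("192.168.96.129")}
    	for _, pod := range []string{
    		"whisker-54666d9c5f-8kw6q",
    		"coredns-668d6bf9bc-hb25f",
    		"calico-apiserver-c96dcf9cd-wgrwx",
    	} {
    		fmt.Println(pod, "->", a.assign())
    	}
    }
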
Jan 28 01:27:20.127680 containerd[1830]: 2026-01-28 01:27:20.102 [INFO][4853] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.130/26] IPv6=[] ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" HandleID="k8s-pod-network.8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.130928 containerd[1830]: 2026-01-28 01:27:20.104 [INFO][4841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"coredns-668d6bf9bc-hb25f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fcefe358de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:20.130928 containerd[1830]: 2026-01-28 01:27:20.105 [INFO][4841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.130/32] ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.130928 containerd[1830]: 2026-01-28 01:27:20.105 [INFO][4841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fcefe358de ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.130928 containerd[1830]: 2026-01-28 01:27:20.107 [INFO][4841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.130928 containerd[1830]: 2026-01-28 01:27:20.108 [INFO][4841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501", Pod:"coredns-668d6bf9bc-hb25f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fcefe358de", MAC:"be:2e:8a:ac:87:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:20.130928 containerd[1830]: 2026-01-28 01:27:20.125 [INFO][4841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501" Namespace="kube-system" Pod="coredns-668d6bf9bc-hb25f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:20.155987 containerd[1830]: time="2026-01-28T01:27:20.155739844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:20.159059 containerd[1830]: time="2026-01-28T01:27:20.156264563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:20.159059 containerd[1830]: time="2026-01-28T01:27:20.156325803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:20.159059 containerd[1830]: time="2026-01-28T01:27:20.156670082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:20.209811 containerd[1830]: time="2026-01-28T01:27:20.209499807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hb25f,Uid:02952824-c0c6-4ff4-9cb6-94e3d48d9ca2,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501\"" Jan 28 01:27:20.213942 containerd[1830]: time="2026-01-28T01:27:20.213533081Z" level=info msg="CreateContainer within sandbox \"8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:27:20.242822 containerd[1830]: time="2026-01-28T01:27:20.242784160Z" level=info msg="CreateContainer within sandbox \"8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4945b655cf81f3f348646d5b137f52a00be3cda53c55e41ced6ccd6c343ff81c\"" Jan 28 01:27:20.244181 containerd[1830]: time="2026-01-28T01:27:20.244140198Z" level=info msg="StartContainer for \"4945b655cf81f3f348646d5b137f52a00be3cda53c55e41ced6ccd6c343ff81c\"" Jan 28 01:27:20.290527 containerd[1830]: time="2026-01-28T01:27:20.290251492Z" level=info msg="StartContainer for \"4945b655cf81f3f348646d5b137f52a00be3cda53c55e41ced6ccd6c343ff81c\" returns successfully" Jan 28 01:27:20.896794 containerd[1830]: time="2026-01-28T01:27:20.896646499Z" level=info msg="StopPodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\"" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.941 [INFO][4957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.941 [INFO][4957] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" iface="eth0" netns="/var/run/netns/cni-4036265c-1150-6f63-581d-4ec27b4bda90" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.943 [INFO][4957] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" iface="eth0" netns="/var/run/netns/cni-4036265c-1150-6f63-581d-4ec27b4bda90" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.943 [INFO][4957] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" iface="eth0" netns="/var/run/netns/cni-4036265c-1150-6f63-581d-4ec27b4bda90" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.943 [INFO][4957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.943 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.968 [INFO][4964] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.968 [INFO][4964] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.968 [INFO][4964] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.976 [WARNING][4964] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.976 [INFO][4964] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.977 [INFO][4964] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:20.984665 containerd[1830]: 2026-01-28 01:27:20.982 [INFO][4957] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:20.984665 containerd[1830]: time="2026-01-28T01:27:20.984358408Z" level=info msg="TearDown network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" successfully" Jan 28 01:27:20.984665 containerd[1830]: time="2026-01-28T01:27:20.984389928Z" level=info msg="StopPodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" returns successfully" Jan 28 01:27:20.988104 containerd[1830]: time="2026-01-28T01:27:20.987491803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-wgrwx,Uid:1e049504-bcc6-4539-aec9-ba0a3a0b4d66,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:27:20.989381 systemd[1]: run-netns-cni\x2d4036265c\x2d1150\x2d6f63\x2d581d\x2d4ec27b4bda90.mount: Deactivated successfully. 
Jan 28 01:27:21.167819 kubelet[3322]: I0128 01:27:21.167131 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hb25f" podStartSLOduration=49.167111855 podStartE2EDuration="49.167111855s" podCreationTimestamp="2026-01-28 01:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:27:21.166245536 +0000 UTC m=+54.390190976" watchObservedRunningTime="2026-01-28 01:27:21.167111855 +0000 UTC m=+54.391057215" Jan 28 01:27:21.306332 systemd-networkd[1398]: cali588023ce22d: Link UP Jan 28 01:27:21.307275 systemd-networkd[1398]: cali588023ce22d: Gained carrier Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.237 [INFO][4973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0 calico-apiserver-c96dcf9cd- calico-apiserver 1e049504-bcc6-4539-aec9-ba0a3a0b4d66 978 0 2026-01-28 01:26:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c96dcf9cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 calico-apiserver-c96dcf9cd-wgrwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali588023ce22d [] [] }} ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.237 [INFO][4973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.261 [INFO][4987] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" HandleID="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.261 [INFO][4987] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" HandleID="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"calico-apiserver-c96dcf9cd-wgrwx", "timestamp":"2026-01-28 01:27:21.261591954 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.261 [INFO][4987] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.261 [INFO][4987] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.261 [INFO][4987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.270 [INFO][4987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.273 [INFO][4987] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.278 [INFO][4987] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.279 [INFO][4987] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.281 [INFO][4987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.281 [INFO][4987] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.282 [INFO][4987] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.287 [INFO][4987] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.296 [INFO][4987] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.131/26] block=192.168.96.128/26 handle="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.296 [INFO][4987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.131/26] handle="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.296 [INFO][4987] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:27:21.331294 containerd[1830]: 2026-01-28 01:27:21.296 [INFO][4987] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.131/26] IPv6=[] ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" HandleID="k8s-pod-network.d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.332009 containerd[1830]: 2026-01-28 01:27:21.299 [INFO][4973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e049504-bcc6-4539-aec9-ba0a3a0b4d66", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"calico-apiserver-c96dcf9cd-wgrwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588023ce22d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:21.332009 containerd[1830]: 2026-01-28 01:27:21.299 [INFO][4973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.131/32] ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.332009 containerd[1830]: 2026-01-28 01:27:21.299 [INFO][4973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali588023ce22d ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.332009 containerd[1830]: 2026-01-28 01:27:21.308 [INFO][4973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.332009 containerd[1830]: 2026-01-28 01:27:21.310 [INFO][4973] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e049504-bcc6-4539-aec9-ba0a3a0b4d66", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c", Pod:"calico-apiserver-c96dcf9cd-wgrwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588023ce22d", MAC:"6e:6c:21:64:af:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:21.332009 containerd[1830]: 2026-01-28 01:27:21.326 [INFO][4973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-wgrwx" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:21.362305 containerd[1830]: time="2026-01-28T01:27:21.362192044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:21.362305 containerd[1830]: time="2026-01-28T01:27:21.362258564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:21.362305 containerd[1830]: time="2026-01-28T01:27:21.362270003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:21.362572 containerd[1830]: time="2026-01-28T01:27:21.362347443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:21.411980 containerd[1830]: time="2026-01-28T01:27:21.411938849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-wgrwx,Uid:1e049504-bcc6-4539-aec9-ba0a3a0b4d66,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c\"" Jan 28 01:27:21.413790 containerd[1830]: time="2026-01-28T01:27:21.413761327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:21.666104 containerd[1830]: time="2026-01-28T01:27:21.665917990Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:21.669568 containerd[1830]: time="2026-01-28T01:27:21.669440265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:21.669568 containerd[1830]: time="2026-01-28T01:27:21.669478065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:21.669735 kubelet[3322]: E0128 01:27:21.669686 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:21.669786 kubelet[3322]: E0128 01:27:21.669737 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:21.675487 kubelet[3322]: E0128 01:27:21.675430 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mcz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-wgrwx_calico-apiserver(1e049504-bcc6-4539-aec9-ba0a3a0b4d66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:21.676644 kubelet[3322]: E0128 01:27:21.676608 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:21.828296 systemd-networkd[1398]: cali0fcefe358de: Gained IPv6LL Jan 28 01:27:21.897313 containerd[1830]: time="2026-01-28T01:27:21.896876285Z" level=info msg="StopPodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\"" Jan 28 01:27:21.897313 containerd[1830]: time="2026-01-28T01:27:21.896909005Z" level=info msg="StopPodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\"" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.958 [INFO][5066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.958 [INFO][5066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" iface="eth0" netns="/var/run/netns/cni-c2f7e7f6-03ad-0007-0fcb-a23cc5ebbe93" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.959 [INFO][5066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" iface="eth0" netns="/var/run/netns/cni-c2f7e7f6-03ad-0007-0fcb-a23cc5ebbe93" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.959 [INFO][5066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" iface="eth0" netns="/var/run/netns/cni-c2f7e7f6-03ad-0007-0fcb-a23cc5ebbe93" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.960 [INFO][5066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.960 [INFO][5066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.987 [INFO][5079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.987 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.987 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.997 [WARNING][5079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:21.997 [INFO][5079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:22.002 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:22.008258 containerd[1830]: 2026-01-28 01:27:22.005 [INFO][5066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:22.008258 containerd[1830]: time="2026-01-28T01:27:22.007063880Z" level=info msg="TearDown network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" successfully" Jan 28 01:27:22.008258 containerd[1830]: time="2026-01-28T01:27:22.007089240Z" level=info msg="StopPodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" returns successfully" Jan 28 01:27:22.008258 containerd[1830]: time="2026-01-28T01:27:22.007803319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-48qp6,Uid:4cbfc031-de2b-4325-b7c3-25699b54c64a,Namespace:kube-system,Attempt:1,}" Jan 28 01:27:22.015440 systemd[1]: run-netns-cni\x2dc2f7e7f6\x2d03ad\x2d0007\x2d0fcb\x2da23cc5ebbe93.mount: Deactivated successfully. Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:21.964 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:21.964 [INFO][5067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" iface="eth0" netns="/var/run/netns/cni-aceb7efc-a2f8-9482-f893-a8c3b1632621" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:21.965 [INFO][5067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" iface="eth0" netns="/var/run/netns/cni-aceb7efc-a2f8-9482-f893-a8c3b1632621" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:21.965 [INFO][5067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" iface="eth0" netns="/var/run/netns/cni-aceb7efc-a2f8-9482-f893-a8c3b1632621" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:21.965 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:21.965 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.000 [INFO][5084] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.000 [INFO][5084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.003 [INFO][5084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.020 [WARNING][5084] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.020 [INFO][5084] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.021 [INFO][5084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:22.025194 containerd[1830]: 2026-01-28 01:27:22.023 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:22.027997 containerd[1830]: time="2026-01-28T01:27:22.027202330Z" level=info msg="TearDown network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" successfully" Jan 28 01:27:22.027997 containerd[1830]: time="2026-01-28T01:27:22.027238370Z" level=info msg="StopPodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" returns successfully" Jan 28 01:27:22.027997 containerd[1830]: time="2026-01-28T01:27:22.027959409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cvf4,Uid:0a3f2b82-9dfb-45f4-8480-07421e1f39e6,Namespace:calico-system,Attempt:1,}" Jan 28 01:27:22.027767 systemd[1]: run-netns-cni\x2daceb7efc\x2da2f8\x2d9482\x2df893\x2da8c3b1632621.mount: Deactivated successfully. Jan 28 01:27:22.155745 kubelet[3322]: E0128 01:27:22.155670 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:22.468259 systemd-networkd[1398]: cali588023ce22d: Gained IPv6LL Jan 28 01:27:22.708249 systemd-networkd[1398]: cali12263cbb543: Link UP Jan 28 01:27:22.708976 systemd-networkd[1398]: cali12263cbb543: Gained carrier Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.642 [INFO][5095] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0 csi-node-driver- calico-system 0a3f2b82-9dfb-45f4-8480-07421e1f39e6 999 0 2026-01-28 01:26:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 csi-node-driver-7cvf4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali12263cbb543 [] [] }} ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.643 [INFO][5095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.666 [INFO][5107] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" HandleID="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.666 
[INFO][5107] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" HandleID="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"csi-node-driver-7cvf4", "timestamp":"2026-01-28 01:27:22.666741135 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.667 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.667 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.667 [INFO][5107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.675 [INFO][5107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.678 [INFO][5107] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.681 [INFO][5107] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.682 [INFO][5107] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.684 [INFO][5107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.684 [INFO][5107] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.685 [INFO][5107] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.690 [INFO][5107] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.700 [INFO][5107] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.132/26] block=192.168.96.128/26 handle="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.700 [INFO][5107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.132/26] handle="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.730359 
containerd[1830]: 2026-01-28 01:27:22.700 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:22.730359 containerd[1830]: 2026-01-28 01:27:22.701 [INFO][5107] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.132/26] IPv6=[] ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" HandleID="k8s-pod-network.7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.733842 containerd[1830]: 2026-01-28 01:27:22.704 [INFO][5095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a3f2b82-9dfb-45f4-8480-07421e1f39e6", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"csi-node-driver-7cvf4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12263cbb543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:22.733842 containerd[1830]: 2026-01-28 01:27:22.705 [INFO][5095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.132/32] ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.733842 containerd[1830]: 2026-01-28 01:27:22.705 [INFO][5095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12263cbb543 ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.733842 containerd[1830]: 2026-01-28 01:27:22.709 [INFO][5095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.733842 containerd[1830]: 2026-01-28 01:27:22.709 [INFO][5095] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a3f2b82-9dfb-45f4-8480-07421e1f39e6", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e", Pod:"csi-node-driver-7cvf4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12263cbb543", MAC:"5a:4d:4f:4a:d8:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:22.733842 containerd[1830]: 2026-01-28 01:27:22.726 [INFO][5095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e" Namespace="calico-system" Pod="csi-node-driver-7cvf4" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:22.763288 containerd[1830]: time="2026-01-28T01:27:22.763046511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:22.763288 containerd[1830]: time="2026-01-28T01:27:22.763096391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:22.763288 containerd[1830]: time="2026-01-28T01:27:22.763124071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:22.763514 containerd[1830]: time="2026-01-28T01:27:22.763244551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:22.816632 containerd[1830]: time="2026-01-28T01:27:22.816592031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cvf4,Uid:0a3f2b82-9dfb-45f4-8480-07421e1f39e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e\"" Jan 28 01:27:22.818965 containerd[1830]: time="2026-01-28T01:27:22.818938708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:27:22.849692 systemd-networkd[1398]: cali4201a53ca6f: Link UP Jan 28 01:27:22.857685 systemd-networkd[1398]: cali4201a53ca6f: Gained carrier Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.750 [INFO][5114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0 coredns-668d6bf9bc- kube-system 4cbfc031-de2b-4325-b7c3-25699b54c64a 998 0 2026-01-28 01:26:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 coredns-668d6bf9bc-48qp6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4201a53ca6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.750 [INFO][5114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.784 [INFO][5142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" HandleID="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.784 [INFO][5142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" HandleID="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"coredns-668d6bf9bc-48qp6", "timestamp":"2026-01-28 01:27:22.784706679 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.785 [INFO][5142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.785 [INFO][5142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.785 [INFO][5142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.795 [INFO][5142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.804 [INFO][5142] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.813 [INFO][5142] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.815 [INFO][5142] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.817 [INFO][5142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.817 [INFO][5142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.823 [INFO][5142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466 Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.828 [INFO][5142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.837 [INFO][5142] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.133/26] block=192.168.96.128/26 handle="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.838 [INFO][5142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.133/26] handle="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.838 [INFO][5142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:27:22.872720 containerd[1830]: 2026-01-28 01:27:22.838 [INFO][5142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.133/26] IPv6=[] ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" HandleID="k8s-pod-network.6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.873240 containerd[1830]: 2026-01-28 01:27:22.844 [INFO][5114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cbfc031-de2b-4325-b7c3-25699b54c64a", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"coredns-668d6bf9bc-48qp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4201a53ca6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:22.873240 containerd[1830]: 2026-01-28 01:27:22.845 [INFO][5114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.133/32] ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.873240 containerd[1830]: 2026-01-28 01:27:22.845 [INFO][5114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4201a53ca6f ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.873240 containerd[1830]: 2026-01-28 01:27:22.855 [INFO][5114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.873240 containerd[1830]: 2026-01-28 01:27:22.855 [INFO][5114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cbfc031-de2b-4325-b7c3-25699b54c64a", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466", Pod:"coredns-668d6bf9bc-48qp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4201a53ca6f", MAC:"e6:21:ed:8f:cd:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:22.873240 containerd[1830]: 2026-01-28 01:27:22.870 [INFO][5114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466" Namespace="kube-system" Pod="coredns-668d6bf9bc-48qp6" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:22.899023 containerd[1830]: time="2026-01-28T01:27:22.898950868Z" level=info msg="StopPodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\"" Jan 28 01:27:22.905009 containerd[1830]: time="2026-01-28T01:27:22.904677540Z" level=info msg="StopPodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\"" Jan 28 01:27:22.925122 containerd[1830]: time="2026-01-28T01:27:22.925003670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:22.925254 containerd[1830]: time="2026-01-28T01:27:22.925178469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:22.925254 containerd[1830]: time="2026-01-28T01:27:22.925211149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:22.926208 containerd[1830]: time="2026-01-28T01:27:22.925921788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.010 [INFO][5220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.010 [INFO][5220] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" iface="eth0" netns="/var/run/netns/cni-546c6623-2979-db93-78b7-e69a53d83461" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.011 [INFO][5220] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" iface="eth0" netns="/var/run/netns/cni-546c6623-2979-db93-78b7-e69a53d83461" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.012 [INFO][5220] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" iface="eth0" netns="/var/run/netns/cni-546c6623-2979-db93-78b7-e69a53d83461" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.012 [INFO][5220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.013 [INFO][5220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.038 [INFO][5259] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.038 [INFO][5259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.038 [INFO][5259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.047 [WARNING][5259] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.047 [INFO][5259] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.049 [INFO][5259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:23.054374 containerd[1830]: 2026-01-28 01:27:23.050 [INFO][5220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:27:23.060464 systemd[1]: run-netns-cni\x2d546c6623\x2d2979\x2ddb93\x2d78b7\x2de69a53d83461.mount: Deactivated successfully. Jan 28 01:27:23.062239 containerd[1830]: time="2026-01-28T01:27:23.061229186Z" level=info msg="TearDown network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" successfully" Jan 28 01:27:23.062239 containerd[1830]: time="2026-01-28T01:27:23.061262826Z" level=info msg="StopPodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" returns successfully" Jan 28 01:27:23.062783 containerd[1830]: time="2026-01-28T01:27:23.062761104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79785b9fc9-s4g8w,Uid:c1a44d94-a633-4913-abd2-c73f93d95c86,Namespace:calico-system,Attempt:1,}" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.016 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.017 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" iface="eth0" netns="/var/run/netns/cni-1d0770f4-b776-9341-46f5-38c366873dac" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.017 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" iface="eth0" netns="/var/run/netns/cni-1d0770f4-b776-9341-46f5-38c366873dac" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.018 [INFO][5236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" iface="eth0" netns="/var/run/netns/cni-1d0770f4-b776-9341-46f5-38c366873dac" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.018 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.018 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.042 [INFO][5264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.042 [INFO][5264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.049 [INFO][5264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.066 [WARNING][5264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.066 [INFO][5264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.067 [INFO][5264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:23.070807 containerd[1830]: 2026-01-28 01:27:23.069 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:27:23.072996 containerd[1830]: time="2026-01-28T01:27:23.072966089Z" level=info msg="TearDown network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" successfully" Jan 28 01:27:23.073085 containerd[1830]: time="2026-01-28T01:27:23.073072328Z" level=info msg="StopPodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" returns successfully" Jan 28 01:27:23.074551 containerd[1830]: time="2026-01-28T01:27:23.074527606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bsx7f,Uid:765006a4-4116-4fc5-bb5f-ca94978ecdd0,Namespace:calico-system,Attempt:1,}" Jan 28 01:27:23.074903 systemd[1]: run-netns-cni\x2d1d0770f4\x2db776\x2d9341\x2d46f5\x2d38c366873dac.mount: Deactivated successfully. 
Jan 28 01:27:23.083026 containerd[1830]: time="2026-01-28T01:27:23.082993514Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:23.161472 kubelet[3322]: E0128 01:27:23.161434 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:23.617382 containerd[1830]: time="2026-01-28T01:27:23.617283236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-48qp6,Uid:4cbfc031-de2b-4325-b7c3-25699b54c64a,Namespace:kube-system,Attempt:1,} returns sandbox id \"6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466\"" Jan 28 01:27:23.621043 containerd[1830]: time="2026-01-28T01:27:23.620940670Z" level=info msg="CreateContainer within sandbox \"6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:27:23.876349 systemd-networkd[1398]: cali12263cbb543: Gained IPv6LL Jan 28 01:27:24.897133 containerd[1830]: time="2026-01-28T01:27:24.897090964Z" level=info msg="StopPodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\"" Jan 28 01:27:24.900863 systemd-networkd[1398]: cali4201a53ca6f: Gained IPv6LL Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.941 [INFO][5288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.941 [INFO][5288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" iface="eth0" netns="/var/run/netns/cni-879073d9-a6aa-8103-bf38-f42bc1f0002f" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.942 [INFO][5288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" iface="eth0" netns="/var/run/netns/cni-879073d9-a6aa-8103-bf38-f42bc1f0002f" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.944 [INFO][5288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" iface="eth0" netns="/var/run/netns/cni-879073d9-a6aa-8103-bf38-f42bc1f0002f" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.944 [INFO][5288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.944 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.969 [INFO][5295] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.969 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.969 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.977 [WARNING][5295] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.977 [INFO][5295] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.978 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:24.982250 containerd[1830]: 2026-01-28 01:27:24.980 [INFO][5288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:27:24.984430 containerd[1830]: time="2026-01-28T01:27:24.984276434Z" level=info msg="TearDown network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" successfully" Jan 28 01:27:24.984430 containerd[1830]: time="2026-01-28T01:27:24.984309314Z" level=info msg="StopPodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" returns successfully" Jan 28 01:27:24.985748 systemd[1]: run-netns-cni\x2d879073d9\x2da6aa\x2d8103\x2dbf38\x2df42bc1f0002f.mount: Deactivated successfully. 
Jan 28 01:27:24.987368 containerd[1830]: time="2026-01-28T01:27:24.987035710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-km68f,Uid:059763c9-3c80-4151-8ab7-5e7bceb1fb9d,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:27:26.874614 containerd[1830]: time="2026-01-28T01:27:26.874576451Z" level=info msg="StopPodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\"" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.920 [WARNING][5318] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.920 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.920 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" iface="eth0" netns="" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.920 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.920 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.938 [INFO][5327] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.938 [INFO][5327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.938 [INFO][5327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.947 [WARNING][5327] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.947 [INFO][5327] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.949 [INFO][5327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:26.953367 containerd[1830]: 2026-01-28 01:27:26.951 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:26.954007 containerd[1830]: time="2026-01-28T01:27:26.953406293Z" level=info msg="TearDown network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" successfully" Jan 28 01:27:26.954007 containerd[1830]: time="2026-01-28T01:27:26.953432093Z" level=info msg="StopPodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" returns successfully" Jan 28 01:27:26.954323 containerd[1830]: time="2026-01-28T01:27:26.954208452Z" level=info msg="RemovePodSandbox for \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\"" Jan 28 01:27:26.954323 containerd[1830]: time="2026-01-28T01:27:26.954239132Z" level=info msg="Forcibly stopping sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\"" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:26.984 [WARNING][5341] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:26.984 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:26.984 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" iface="eth0" netns="" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:26.984 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:26.984 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.006 [INFO][5348] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.006 [INFO][5348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.006 [INFO][5348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.016 [WARNING][5348] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.016 [INFO][5348] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" HandleID="k8s-pod-network.fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Workload="ci--4081.3.6--n--11aaf12d54-k8s-whisker--5b9c57688c--5cnwf-eth0" Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.018 [INFO][5348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:27.022340 containerd[1830]: 2026-01-28 01:27:27.020 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59" Jan 28 01:27:27.022340 containerd[1830]: time="2026-01-28T01:27:27.022243630Z" level=info msg="TearDown network for sandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" successfully" Jan 28 01:27:27.696348 containerd[1830]: time="2026-01-28T01:27:27.696163624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:27:27.696348 containerd[1830]: time="2026-01-28T01:27:27.696286864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:27:27.955303 containerd[1830]: time="2026-01-28T01:27:27.705350970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:27:27.955473 kubelet[3322]: E0128 01:27:27.697794 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:27.955473 kubelet[3322]: E0128 01:27:27.698067 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:27.955473 kubelet[3322]: E0128 01:27:27.699838 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:28.406935 containerd[1830]: time="2026-01-28T01:27:28.406888882Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:31.856597 containerd[1830]: time="2026-01-28T01:27:31.856444903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:27:31.856597 containerd[1830]: time="2026-01-28T01:27:31.856560583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:27:31.857028 kubelet[3322]: E0128 01:27:31.856746 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:31.857028 kubelet[3322]: E0128 01:27:31.856831 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:31.857297 kubelet[3322]: E0128 01:27:31.857053 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:31.858437 kubelet[3322]: E0128 01:27:31.858341 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 
01:27:31.862627 containerd[1830]: time="2026-01-28T01:27:31.862483614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:27:32.048629 containerd[1830]: time="2026-01-28T01:27:32.048552622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:32.049085 containerd[1830]: time="2026-01-28T01:27:32.048762502Z" level=info msg="RemovePodSandbox \"fd27c58882f83989182b197c56358039de89736c6d0748039064c514252eef59\" returns successfully" Jan 28 01:27:32.049395 containerd[1830]: time="2026-01-28T01:27:32.049289221Z" level=info msg="StopPodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\"" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.080 [WARNING][5364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cbfc031-de2b-4325-b7c3-25699b54c64a", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466", Pod:"coredns-668d6bf9bc-48qp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4201a53ca6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.080 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.080 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" iface="eth0" netns="" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.080 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.080 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.097 [INFO][5372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.097 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.097 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.105 [WARNING][5372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.105 [INFO][5372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.106 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:32.109828 containerd[1830]: 2026-01-28 01:27:32.108 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.110569 containerd[1830]: time="2026-01-28T01:27:32.110286092Z" level=info msg="TearDown network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" successfully" Jan 28 01:27:32.110569 containerd[1830]: time="2026-01-28T01:27:32.110326691Z" level=info msg="StopPodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" returns successfully" Jan 28 01:27:32.111273 containerd[1830]: time="2026-01-28T01:27:32.110995810Z" level=info msg="RemovePodSandbox for \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\"" Jan 28 01:27:32.111273 containerd[1830]: time="2026-01-28T01:27:32.111024010Z" level=info msg="Forcibly stopping sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\"" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.143 [WARNING][5387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4cbfc031-de2b-4325-b7c3-25699b54c64a", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466", Pod:"coredns-668d6bf9bc-48qp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4201a53ca6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.144 [INFO][5387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.144 [INFO][5387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" iface="eth0" netns="" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.144 [INFO][5387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.144 [INFO][5387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.163 [INFO][5394] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.163 [INFO][5394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.163 [INFO][5394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.171 [WARNING][5394] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.171 [INFO][5394] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" HandleID="k8s-pod-network.0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--48qp6-eth0" Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.172 [INFO][5394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:32.175393 containerd[1830]: 2026-01-28 01:27:32.173 [INFO][5387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844" Jan 28 01:27:32.175865 containerd[1830]: time="2026-01-28T01:27:32.175433716Z" level=info msg="TearDown network for sandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" successfully" Jan 28 01:27:32.181384 kubelet[3322]: E0128 01:27:32.181343 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:27:32.259936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616567980.mount: Deactivated successfully. 
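[Editor's note] The pod_workers entry above shows both csi-node-driver-7cvf4 containers held in ImagePullBackOff. Kubelet retries failed pulls with exponential backoff; 10s doubling to a 5m cap matches commonly documented defaults, but the exact values below are an assumption, not read from this node's configuration.

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch of the backoff schedule implied by repeated
    // "Back-off pulling image" events: each failed attempt doubles the
    // wait, up to a cap.
    func main() {
        backoff := 10 * time.Second
        const maxBackoff = 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }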
Jan 28 01:27:32.610920 containerd[1830]: time="2026-01-28T01:27:32.610878599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:32.909520 containerd[1830]: time="2026-01-28T01:27:32.909307722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:27:32.909520 containerd[1830]: time="2026-01-28T01:27:32.909426882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:27:32.910529 kubelet[3322]: E0128 01:27:32.909821 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:32.910529 kubelet[3322]: E0128 01:27:32.909866 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:32.910529 kubelet[3322]: E0128 01:27:32.909965 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a9c0213a57f46a6a663e5576055455b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:32.917748 containerd[1830]: 
time="2026-01-28T01:27:32.917587590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:27:32.944440 systemd-networkd[1398]: cali2d2a3013aad: Link UP Jan 28 01:27:32.946609 systemd-networkd[1398]: cali2d2a3013aad: Gained carrier Jan 28 01:27:32.994299 containerd[1830]: time="2026-01-28T01:27:32.994257318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:32.994549 containerd[1830]: time="2026-01-28T01:27:32.994460838Z" level=info msg="RemovePodSandbox \"0aabb2384ec451c58375af7f06c27ee473a80ead17ff2b5e3c53bdd97dbe7844\" returns successfully" Jan 28 01:27:32.995256 containerd[1830]: time="2026-01-28T01:27:32.995222597Z" level=info msg="StopPodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\"" Jan 28 01:27:32.996483 containerd[1830]: time="2026-01-28T01:27:32.996039875Z" level=info msg="CreateContainer within sandbox \"6cca3c331f0802a2675008f5dc3a39aa0f97837d9f6c215bfa176e9ff2705466\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c28a0c9f4f8f275983118dd8ebf9e047f65008eb3a09c0cc2cb9facba773b061\"" Jan 28 01:27:32.999370 containerd[1830]: time="2026-01-28T01:27:32.998519352Z" level=info msg="StartContainer for \"c28a0c9f4f8f275983118dd8ebf9e047f65008eb3a09c0cc2cb9facba773b061\"" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.833 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0 goldmane-666569f655- calico-system 765006a4-4116-4fc5-bb5f-ca94978ecdd0 1015 0 2026-01-28 01:26:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 goldmane-666569f655-bsx7f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2d2a3013aad [] [] }} ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.833 [INFO][5401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.867 [INFO][5414] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" HandleID="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.868 [INFO][5414] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" HandleID="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" 
Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"goldmane-666569f655-bsx7f", "timestamp":"2026-01-28 01:27:32.867720503 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.869 [INFO][5414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.869 [INFO][5414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.869 [INFO][5414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.878 [INFO][5414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.884 [INFO][5414] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.895 [INFO][5414] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.899 [INFO][5414] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.907 [INFO][5414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.913 [INFO][5414] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.920 [INFO][5414] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.927 [INFO][5414] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.940 [INFO][5414] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.134/26] block=192.168.96.128/26 handle="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.940 [INFO][5414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.134/26] handle="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.940 [INFO][5414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:27:32.999750 containerd[1830]: 2026-01-28 01:27:32.940 [INFO][5414] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.134/26] IPv6=[] ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" HandleID="k8s-pod-network.70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:33.000560 containerd[1830]: 2026-01-28 01:27:32.942 [INFO][5401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"765006a4-4116-4fc5-bb5f-ca94978ecdd0", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"goldmane-666569f655-bsx7f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d2a3013aad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.000560 containerd[1830]: 2026-01-28 01:27:32.942 [INFO][5401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.134/32] ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:33.000560 containerd[1830]: 2026-01-28 01:27:32.942 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d2a3013aad ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:33.000560 containerd[1830]: 2026-01-28 01:27:32.946 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:33.000560 containerd[1830]: 2026-01-28 01:27:32.946 [INFO][5401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" 
Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"765006a4-4116-4fc5-bb5f-ca94978ecdd0", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a", Pod:"goldmane-666569f655-bsx7f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d2a3013aad", MAC:"42:95:57:59:c7:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.000560 containerd[1830]: 2026-01-28 01:27:32.966 [INFO][5401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a" Namespace="calico-system" Pod="goldmane-666569f655-bsx7f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:27:33.084877 systemd-networkd[1398]: califadd15e77a9: Link UP Jan 28 01:27:33.092789 systemd-networkd[1398]: califadd15e77a9: Gained carrier Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.921 [INFO][5424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0 calico-kube-controllers-79785b9fc9- calico-system c1a44d94-a633-4913-abd2-c73f93d95c86 1014 0 2026-01-28 01:26:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79785b9fc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 calico-kube-controllers-79785b9fc9-s4g8w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califadd15e77a9 [] [] }} ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.921 [INFO][5424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" 
WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.979 [INFO][5437] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" HandleID="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.994 [INFO][5437] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" HandleID="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"calico-kube-controllers-79785b9fc9-s4g8w", "timestamp":"2026-01-28 01:27:32.9790411 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.994 [INFO][5437] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.994 [INFO][5437] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:32.994 [INFO][5437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.013 [INFO][5437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.020 [INFO][5437] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.026 [INFO][5437] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.028 [INFO][5437] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.030 [INFO][5437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.031 [INFO][5437] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.034 [INFO][5437] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678 Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.049 [INFO][5437] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" 
host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.068 [INFO][5437] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.135/26] block=192.168.96.128/26 handle="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.070 [INFO][5437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.135/26] handle="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.070 [INFO][5437] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:33.128248 containerd[1830]: 2026-01-28 01:27:33.071 [INFO][5437] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.135/26] IPv6=[] ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" HandleID="k8s-pod-network.aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.132244 containerd[1830]: 2026-01-28 01:27:33.075 [INFO][5424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0", GenerateName:"calico-kube-controllers-79785b9fc9-", Namespace:"calico-system", SelfLink:"", UID:"c1a44d94-a633-4913-abd2-c73f93d95c86", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79785b9fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"calico-kube-controllers-79785b9fc9-s4g8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califadd15e77a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.132244 containerd[1830]: 2026-01-28 01:27:33.075 [INFO][5424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.135/32] ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.132244 containerd[1830]: 2026-01-28 01:27:33.075 
[INFO][5424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califadd15e77a9 ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.132244 containerd[1830]: 2026-01-28 01:27:33.095 [INFO][5424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.132244 containerd[1830]: 2026-01-28 01:27:33.097 [INFO][5424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0", GenerateName:"calico-kube-controllers-79785b9fc9-", Namespace:"calico-system", SelfLink:"", UID:"c1a44d94-a633-4913-abd2-c73f93d95c86", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79785b9fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678", Pod:"calico-kube-controllers-79785b9fc9-s4g8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califadd15e77a9", MAC:"3e:c1:5f:bc:b0:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.132244 containerd[1830]: 2026-01-28 01:27:33.119 [INFO][5424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678" Namespace="calico-system" Pod="calico-kube-controllers-79785b9fc9-s4g8w" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:27:33.198189 containerd[1830]: time="2026-01-28T01:27:33.198064780Z" level=info msg="StartContainer for \"c28a0c9f4f8f275983118dd8ebf9e047f65008eb3a09c0cc2cb9facba773b061\" returns successfully" Jan 28 01:27:33.203384 containerd[1830]: time="2026-01-28T01:27:33.203017812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:33.203384 containerd[1830]: time="2026-01-28T01:27:33.203239652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:33.203384 containerd[1830]: time="2026-01-28T01:27:33.203260532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:33.203820 containerd[1830]: time="2026-01-28T01:27:33.203777011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.117 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a3f2b82-9dfb-45f4-8480-07421e1f39e6", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e", Pod:"csi-node-driver-7cvf4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12263cbb543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.119 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.119 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" iface="eth0" netns="" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.119 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.119 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.191 [INFO][5519] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.191 [INFO][5519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.192 [INFO][5519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.207 [WARNING][5519] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.207 [INFO][5519] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.208 [INFO][5519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:33.214548 containerd[1830]: 2026-01-28 01:27:33.211 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.215429 containerd[1830]: time="2026-01-28T01:27:33.215117595Z" level=info msg="TearDown network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" successfully" Jan 28 01:27:33.215429 containerd[1830]: time="2026-01-28T01:27:33.215391074Z" level=info msg="StopPodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" returns successfully" Jan 28 01:27:33.216461 containerd[1830]: time="2026-01-28T01:27:33.216204433Z" level=info msg="RemovePodSandbox for \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\"" Jan 28 01:27:33.216461 containerd[1830]: time="2026-01-28T01:27:33.216234193Z" level=info msg="Forcibly stopping sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\"" Jan 28 01:27:33.277766 containerd[1830]: time="2026-01-28T01:27:33.277723583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bsx7f,Uid:765006a4-4116-4fc5-bb5f-ca94978ecdd0,Namespace:calico-system,Attempt:1,} returns sandbox id \"70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a\"" Jan 28 01:27:33.291531 containerd[1830]: time="2026-01-28T01:27:33.291153363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:33.291531 containerd[1830]: time="2026-01-28T01:27:33.291202723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:33.291531 containerd[1830]: time="2026-01-28T01:27:33.291213123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:33.291531 containerd[1830]: time="2026-01-28T01:27:33.291290243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:33.345611 systemd-networkd[1398]: cali2f83ed49f87: Link UP Jan 28 01:27:33.346815 systemd-networkd[1398]: cali2f83ed49f87: Gained carrier Jan 28 01:27:33.353940 containerd[1830]: time="2026-01-28T01:27:33.353889312Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.178 [INFO][5492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0 calico-apiserver-c96dcf9cd- calico-apiserver 059763c9-3c80-4151-8ab7-5e7bceb1fb9d 1027 0 2026-01-28 01:26:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c96dcf9cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-11aaf12d54 calico-apiserver-c96dcf9cd-km68f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f83ed49f87 [] [] }} ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.181 [INFO][5492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.253 [INFO][5550] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" HandleID="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.254 [INFO][5550] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" HandleID="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-11aaf12d54", "pod":"calico-apiserver-c96dcf9cd-km68f", "timestamp":"2026-01-28 01:27:33.253827218 +0000 UTC"}, Hostname:"ci-4081.3.6-n-11aaf12d54", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.255 [INFO][5550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.256 [INFO][5550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.257 [INFO][5550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-11aaf12d54' Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.276 [INFO][5550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.282 [INFO][5550] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.287 [INFO][5550] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.293 [INFO][5550] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.299 [INFO][5550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.299 [INFO][5550] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.301 [INFO][5550] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.311 [INFO][5550] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.333 [INFO][5550] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.96.136/26] block=192.168.96.128/26 handle="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.333 [INFO][5550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.136/26] handle="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" host="ci-4081.3.6-n-11aaf12d54" Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.333 [INFO][5550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:27:33.380116 containerd[1830]: 2026-01-28 01:27:33.333 [INFO][5550] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.96.136/26] IPv6=[] ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" HandleID="k8s-pod-network.42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.380864 containerd[1830]: 2026-01-28 01:27:33.338 [INFO][5492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"059763c9-3c80-4151-8ab7-5e7bceb1fb9d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"", Pod:"calico-apiserver-c96dcf9cd-km68f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f83ed49f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.380864 containerd[1830]: 2026-01-28 01:27:33.339 [INFO][5492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.136/32] ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.380864 containerd[1830]: 2026-01-28 01:27:33.339 [INFO][5492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f83ed49f87 ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.380864 containerd[1830]: 2026-01-28 01:27:33.352 [INFO][5492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.380864 containerd[1830]: 2026-01-28 01:27:33.352 [INFO][5492] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"059763c9-3c80-4151-8ab7-5e7bceb1fb9d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a", Pod:"calico-apiserver-c96dcf9cd-km68f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f83ed49f87", MAC:"7e:72:07:d5:e6:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.380864 containerd[1830]: 2026-01-28 01:27:33.375 [INFO][5492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a" Namespace="calico-apiserver" Pod="calico-apiserver-c96dcf9cd-km68f" WorkloadEndpoint="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:27:33.400948 containerd[1830]: time="2026-01-28T01:27:33.400904083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79785b9fc9-s4g8w,Uid:c1a44d94-a633-4913-abd2-c73f93d95c86,Namespace:calico-system,Attempt:1,} returns sandbox id \"aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678\"" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.331 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a3f2b82-9dfb-45f4-8480-07421e1f39e6", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"7db180dbb1545f4c4c5ba67abbb9700fe3664253ff0193662ae1651d4c79f12e", Pod:"csi-node-driver-7cvf4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12263cbb543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.332 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.332 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" iface="eth0" netns="" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.332 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.332 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.384 [INFO][5635] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.386 [INFO][5635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.386 [INFO][5635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.399 [WARNING][5635] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.399 [INFO][5635] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" HandleID="k8s-pod-network.fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Workload="ci--4081.3.6--n--11aaf12d54-k8s-csi--node--driver--7cvf4-eth0" Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.401 [INFO][5635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:33.407556 containerd[1830]: 2026-01-28 01:27:33.405 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05" Jan 28 01:27:33.691702 containerd[1830]: time="2026-01-28T01:27:33.644239607Z" level=info msg="TearDown network for sandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" successfully" Jan 28 01:27:34.052285 systemd-networkd[1398]: cali2d2a3013aad: Gained IPv6LL Jan 28 01:27:34.224485 kubelet[3322]: I0128 01:27:34.224424 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-48qp6" podStartSLOduration=62.224406718 podStartE2EDuration="1m2.224406718s" podCreationTimestamp="2026-01-28 01:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:27:34.223905878 +0000 UTC m=+67.447851238" watchObservedRunningTime="2026-01-28 01:27:34.224406718 +0000 UTC m=+67.448352078" Jan 28 01:27:34.308586 systemd-networkd[1398]: califadd15e77a9: Gained IPv6LL Jan 28 01:27:34.436265 systemd-networkd[1398]: cali2f83ed49f87: Gained IPv6LL Jan 28 01:27:34.944698 containerd[1830]: time="2026-01-28T01:27:34.944476224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:27:34.944698 containerd[1830]: time="2026-01-28T01:27:34.944610464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:34.945268 kubelet[3322]: E0128 01:27:34.944730 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:34.945268 kubelet[3322]: E0128 01:27:34.944779 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:34.945268 kubelet[3322]: E0128 01:27:34.944963 3322 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:34.945505 containerd[1830]: time="2026-01-28T01:27:34.945355623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:27:34.946787 kubelet[3322]: E0128 01:27:34.946750 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:27:35.558930 containerd[1830]: time="2026-01-28T01:27:35.558705245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:27:35.558930 containerd[1830]: time="2026-01-28T01:27:35.558761205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:27:35.558930 containerd[1830]: time="2026-01-28T01:27:35.558780245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:35.558930 containerd[1830]: time="2026-01-28T01:27:35.558864205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:27:35.604103 containerd[1830]: time="2026-01-28T01:27:35.604061699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c96dcf9cd-km68f,Uid:059763c9-3c80-4151-8ab7-5e7bceb1fb9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a\"" Jan 28 01:27:35.644169 containerd[1830]: time="2026-01-28T01:27:35.644090920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:35.644169 containerd[1830]: time="2026-01-28T01:27:35.644169520Z" level=info msg="RemovePodSandbox \"fde9622d2fa9b54667701fcdffb3b5442e4f33a9e1c98dd6c060bbcc51814a05\" returns successfully" Jan 28 01:27:35.644678 containerd[1830]: time="2026-01-28T01:27:35.644657119Z" level=info msg="StopPodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\"" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.679 [WARNING][5710] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501", Pod:"coredns-668d6bf9bc-hb25f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fcefe358de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.679 [INFO][5710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.679 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" iface="eth0" netns="" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.679 [INFO][5710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.679 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.698 [INFO][5717] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.698 [INFO][5717] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.698 [INFO][5717] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.708 [WARNING][5717] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.708 [INFO][5717] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.712 [INFO][5717] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:35.716206 containerd[1830]: 2026-01-28 01:27:35.714 [INFO][5710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.716712 containerd[1830]: time="2026-01-28T01:27:35.716226975Z" level=info msg="TearDown network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" successfully" Jan 28 01:27:35.716712 containerd[1830]: time="2026-01-28T01:27:35.716252895Z" level=info msg="StopPodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" returns successfully" Jan 28 01:27:35.717201 containerd[1830]: time="2026-01-28T01:27:35.716910014Z" level=info msg="RemovePodSandbox for \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\"" Jan 28 01:27:35.717201 containerd[1830]: time="2026-01-28T01:27:35.716943054Z" level=info msg="Forcibly stopping sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\"" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.750 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02952824-c0c6-4ff4-9cb6-94e3d48d9ca2", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"8e3ba2cd2335f628184effeb778d6bbe353364c186064116675275a862cef501", Pod:"coredns-668d6bf9bc-hb25f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fcefe358de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.750 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.750 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" iface="eth0" netns="" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.750 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.750 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.770 [INFO][5738] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.771 [INFO][5738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.771 [INFO][5738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.779 [WARNING][5738] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.779 [INFO][5738] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" HandleID="k8s-pod-network.f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Workload="ci--4081.3.6--n--11aaf12d54-k8s-coredns--668d6bf9bc--hb25f-eth0" Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.780 [INFO][5738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:35.784085 containerd[1830]: 2026-01-28 01:27:35.782 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756" Jan 28 01:27:35.784881 containerd[1830]: time="2026-01-28T01:27:35.784559515Z" level=info msg="TearDown network for sandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" successfully" Jan 28 01:27:35.849616 containerd[1830]: time="2026-01-28T01:27:35.849462660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:35.849616 containerd[1830]: time="2026-01-28T01:27:35.849530420Z" level=info msg="RemovePodSandbox \"f8b2d764fbb833fed06c8e1c4183fc40730445dcb8d4799810da3ab497fbf756\" returns successfully" Jan 28 01:27:35.850118 containerd[1830]: time="2026-01-28T01:27:35.850095859Z" level=info msg="StopPodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\"" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.881 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e049504-bcc6-4539-aec9-ba0a3a0b4d66", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c", Pod:"calico-apiserver-c96dcf9cd-wgrwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588023ce22d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.881 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.881 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" iface="eth0" netns="" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.881 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.881 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.904 [INFO][5760] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.904 [INFO][5760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.904 [INFO][5760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.912 [WARNING][5760] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.912 [INFO][5760] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.914 [INFO][5760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:35.917930 containerd[1830]: 2026-01-28 01:27:35.915 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.918499 containerd[1830]: time="2026-01-28T01:27:35.917978479Z" level=info msg="TearDown network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" successfully" Jan 28 01:27:35.918499 containerd[1830]: time="2026-01-28T01:27:35.918009119Z" level=info msg="StopPodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" returns successfully" Jan 28 01:27:35.919222 containerd[1830]: time="2026-01-28T01:27:35.918857878Z" level=info msg="RemovePodSandbox for \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\"" Jan 28 01:27:35.919222 containerd[1830]: time="2026-01-28T01:27:35.918887478Z" level=info msg="Forcibly stopping sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\"" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.954 [WARNING][5774] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e049504-bcc6-4539-aec9-ba0a3a0b4d66", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"d183b320bb9845e45de5576198cacee171b4a0a8ecf7a485c5e8687a81a7fd2c", Pod:"calico-apiserver-c96dcf9cd-wgrwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588023ce22d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.954 [INFO][5774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.954 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" iface="eth0" netns="" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.954 [INFO][5774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.954 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.973 [INFO][5781] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.973 [INFO][5781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.973 [INFO][5781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.981 [WARNING][5781] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.981 [INFO][5781] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" HandleID="k8s-pod-network.815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--wgrwx-eth0" Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.982 [INFO][5781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:27:35.986367 containerd[1830]: 2026-01-28 01:27:35.984 [INFO][5774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1" Jan 28 01:27:35.988174 containerd[1830]: time="2026-01-28T01:27:35.987043458Z" level=info msg="TearDown network for sandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" successfully" Jan 28 01:27:36.005529 containerd[1830]: time="2026-01-28T01:27:36.005489271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:27:36.005689 containerd[1830]: time="2026-01-28T01:27:36.005672991Z" level=info msg="RemovePodSandbox \"815b1627b98690c481f7efa346bca03f394cc3a34f0abfcf2d385d9d4b308ad1\" returns successfully" Jan 28 01:27:36.061948 containerd[1830]: time="2026-01-28T01:27:36.061902989Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:36.064883 containerd[1830]: time="2026-01-28T01:27:36.064837984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:27:36.064974 containerd[1830]: time="2026-01-28T01:27:36.064946304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:36.065394 kubelet[3322]: E0128 01:27:36.065166 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:36.065394 kubelet[3322]: E0128 01:27:36.065217 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:36.065814 kubelet[3322]: E0128 01:27:36.065430 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nlqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bsx7f_calico-system(765006a4-4116-4fc5-bb5f-ca94978ecdd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:36.066393 containerd[1830]: time="2026-01-28T01:27:36.066121143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:27:36.067119 kubelet[3322]: E0128 01:27:36.066917 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:27:36.222479 kubelet[3322]: E0128 01:27:36.222141 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:27:36.332774 containerd[1830]: time="2026-01-28T01:27:36.332683272Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:36.335772 containerd[1830]: time="2026-01-28T01:27:36.335734588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:27:36.335946 containerd[1830]: time="2026-01-28T01:27:36.335847628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:36.336005 kubelet[3322]: E0128 01:27:36.335965 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:36.336066 kubelet[3322]: E0128 01:27:36.336014 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:36.337794 containerd[1830]: time="2026-01-28T01:27:36.336349067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:36.337881 kubelet[3322]: E0128 01:27:36.337427 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbfgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79785b9fc9-s4g8w_calico-system(c1a44d94-a633-4913-abd2-c73f93d95c86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:36.338574 kubelet[3322]: E0128 01:27:36.338542 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:27:36.656981 containerd[1830]: time="2026-01-28T01:27:36.656806518Z" level=info msg="trying next 
host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:36.660157 containerd[1830]: time="2026-01-28T01:27:36.660064793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:36.660220 containerd[1830]: time="2026-01-28T01:27:36.660137513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:36.660387 kubelet[3322]: E0128 01:27:36.660346 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:36.660447 kubelet[3322]: E0128 01:27:36.660397 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:36.660559 kubelet[3322]: E0128 01:27:36.660516 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx9wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-km68f_calico-apiserver(059763c9-3c80-4151-8ab7-5e7bceb1fb9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:36.661663 kubelet[3322]: E0128 01:27:36.661623 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:27:36.898041 containerd[1830]: time="2026-01-28T01:27:36.896909125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:37.159065 containerd[1830]: time="2026-01-28T01:27:37.159002256Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:37.162073 containerd[1830]: time="2026-01-28T01:27:37.162033572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:37.162149 containerd[1830]: time="2026-01-28T01:27:37.162133852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:37.162784 kubelet[3322]: E0128 01:27:37.162287 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:37.162784 kubelet[3322]: E0128 01:27:37.162336 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:37.162784 kubelet[3322]: E0128 
01:27:37.162458 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mcz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-wgrwx_calico-apiserver(1e049504-bcc6-4539-aec9-ba0a3a0b4d66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:37.164172 kubelet[3322]: E0128 01:27:37.163875 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:37.224314 kubelet[3322]: E0128 01:27:37.223612 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:27:37.224314 kubelet[3322]: E0128 01:27:37.223765 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:27:42.389373 systemd[1]: Started sshd@7-10.200.20.23:22-10.200.16.10:55512.service - OpenSSH per-connection server daemon (10.200.16.10:55512). Jan 28 01:27:42.875530 sshd[5802]: Accepted publickey for core from 10.200.16.10 port 55512 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:42.877037 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:42.882007 systemd-logind[1791]: New session 10 of user core. Jan 28 01:27:42.889372 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:27:43.295841 sshd[5802]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:43.301235 systemd-logind[1791]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:27:43.301400 systemd[1]: sshd@7-10.200.20.23:22-10.200.16.10:55512.service: Deactivated successfully. Jan 28 01:27:43.303972 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:27:43.305041 systemd-logind[1791]: Removed session 10. 
Jan 28 01:27:45.900478 containerd[1830]: time="2026-01-28T01:27:45.900123809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:27:45.902397 kubelet[3322]: E0128 01:27:45.902207 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:27:46.175535 containerd[1830]: time="2026-01-28T01:27:46.175401278Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:46.178102 containerd[1830]: time="2026-01-28T01:27:46.177786675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:27:46.178102 containerd[1830]: time="2026-01-28T01:27:46.177844874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:27:46.179025 kubelet[3322]: E0128 01:27:46.177969 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:46.179025 kubelet[3322]: E0128 01:27:46.178015 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:27:46.179025 kubelet[3322]: E0128 01:27:46.178121 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:46.180882 containerd[1830]: time="2026-01-28T01:27:46.180736110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:27:46.441664 containerd[1830]: time="2026-01-28T01:27:46.441541321Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:46.446001 containerd[1830]: time="2026-01-28T01:27:46.445955194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:27:46.446082 containerd[1830]: time="2026-01-28T01:27:46.446054354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:27:46.446235 kubelet[3322]: E0128 01:27:46.446182 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:46.446286 kubelet[3322]: E0128 01:27:46.446239 3322 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:27:46.449355 kubelet[3322]: E0128 01:27:46.449300 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:46.450495 kubelet[3322]: E0128 01:27:46.450456 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:27:47.897224 containerd[1830]: time="2026-01-28T01:27:47.897075829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:27:48.192944 containerd[1830]: time="2026-01-28T01:27:48.192787588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:48.195720 containerd[1830]: time="2026-01-28T01:27:48.195620663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:27:48.195720 containerd[1830]: time="2026-01-28T01:27:48.195690863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:48.195866 kubelet[3322]: E0128 01:27:48.195826 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:48.196207 kubelet[3322]: E0128 01:27:48.195876 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:27:48.196207 kubelet[3322]: E0128 01:27:48.195984 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx9wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-km68f_calico-apiserver(059763c9-3c80-4151-8ab7-5e7bceb1fb9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:48.197183 kubelet[3322]: E0128 01:27:48.197048 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:27:48.384499 systemd[1]: Started sshd@8-10.200.20.23:22-10.200.16.10:55514.service - OpenSSH per-connection server daemon (10.200.16.10:55514). Jan 28 01:27:48.871609 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 55514 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:48.873010 sshd[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:48.876728 systemd-logind[1791]: New session 11 of user core. Jan 28 01:27:48.882710 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 01:27:48.898276 kubelet[3322]: E0128 01:27:48.898180 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:27:48.898932 containerd[1830]: time="2026-01-28T01:27:48.898883094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:27:49.189748 containerd[1830]: time="2026-01-28T01:27:49.189607820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:49.194796 containerd[1830]: time="2026-01-28T01:27:49.194740213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:27:49.194918 containerd[1830]: time="2026-01-28T01:27:49.194854212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:27:49.196678 kubelet[3322]: E0128 01:27:49.196172 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:49.196678 kubelet[3322]: E0128 01:27:49.196218 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:27:49.196678 kubelet[3322]: E0128 01:27:49.196333 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nlqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bsx7f_calico-system(765006a4-4116-4fc5-bb5f-ca94978ecdd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:49.197957 kubelet[3322]: E0128 01:27:49.197815 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:27:49.304600 sshd[5842]: 
pam_unix(sshd:session): session closed for user core Jan 28 01:27:49.308554 systemd[1]: sshd@8-10.200.20.23:22-10.200.16.10:55514.service: Deactivated successfully. Jan 28 01:27:49.312110 systemd-logind[1791]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:27:49.312650 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:27:49.313640 systemd-logind[1791]: Removed session 11. Jan 28 01:27:50.900068 containerd[1830]: time="2026-01-28T01:27:50.899423509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:27:51.200567 containerd[1830]: time="2026-01-28T01:27:51.200312940Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:51.203913 containerd[1830]: time="2026-01-28T01:27:51.203777375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:27:51.203913 containerd[1830]: time="2026-01-28T01:27:51.203881774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:51.204064 kubelet[3322]: E0128 01:27:51.204008 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:51.204064 kubelet[3322]: E0128 01:27:51.204053 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:27:51.204423 kubelet[3322]: E0128 01:27:51.204193 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbfgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79785b9fc9-s4g8w_calico-system(c1a44d94-a633-4913-abd2-c73f93d95c86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:51.205699 kubelet[3322]: E0128 01:27:51.205657 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:27:54.388737 systemd[1]: Started sshd@9-10.200.20.23:22-10.200.16.10:45346.service - OpenSSH 
per-connection server daemon (10.200.16.10:45346). Jan 28 01:27:54.883181 sshd[5860]: Accepted publickey for core from 10.200.16.10 port 45346 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:54.884346 sshd[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:54.892652 systemd-logind[1791]: New session 12 of user core. Jan 28 01:27:54.897731 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:27:55.317618 sshd[5860]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:55.323329 systemd-logind[1791]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:27:55.323787 systemd[1]: sshd@9-10.200.20.23:22-10.200.16.10:45346.service: Deactivated successfully. Jan 28 01:27:55.326728 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:27:55.329064 systemd-logind[1791]: Removed session 12. Jan 28 01:27:55.403381 systemd[1]: Started sshd@10-10.200.20.23:22-10.200.16.10:45350.service - OpenSSH per-connection server daemon (10.200.16.10:45350). Jan 28 01:27:55.890671 sshd[5882]: Accepted publickey for core from 10.200.16.10 port 45350 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:55.893390 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:55.899455 systemd-logind[1791]: New session 13 of user core. Jan 28 01:27:55.908380 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:27:56.337026 sshd[5882]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:56.341042 systemd[1]: sshd@10-10.200.20.23:22-10.200.16.10:45350.service: Deactivated successfully. Jan 28 01:27:56.343972 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:27:56.344778 systemd-logind[1791]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:27:56.347467 systemd-logind[1791]: Removed session 13. Jan 28 01:27:56.414393 systemd[1]: Started sshd@11-10.200.20.23:22-10.200.16.10:45354.service - OpenSSH per-connection server daemon (10.200.16.10:45354). Jan 28 01:27:56.861734 sshd[5894]: Accepted publickey for core from 10.200.16.10 port 45354 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:27:56.863070 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:56.867131 systemd-logind[1791]: New session 14 of user core. Jan 28 01:27:56.869409 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 28 01:27:56.902416 containerd[1830]: time="2026-01-28T01:27:56.901786194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:27:56.904044 kubelet[3322]: E0128 01:27:56.903943 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:27:57.218404 containerd[1830]: time="2026-01-28T01:27:57.217836138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:57.220536 containerd[1830]: time="2026-01-28T01:27:57.220432294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:27:57.220536 containerd[1830]: time="2026-01-28T01:27:57.220457094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:27:57.222282 kubelet[3322]: E0128 01:27:57.222104 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:57.222282 kubelet[3322]: E0128 01:27:57.222170 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:27:57.228193 kubelet[3322]: E0128 01:27:57.228134 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a9c0213a57f46a6a663e5576055455b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:57.230618 containerd[1830]: time="2026-01-28T01:27:57.230589199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:27:57.277369 sshd[5894]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:57.281615 systemd[1]: sshd@11-10.200.20.23:22-10.200.16.10:45354.service: Deactivated successfully. Jan 28 01:27:57.281935 systemd-logind[1791]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:27:57.284770 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:27:57.286250 systemd-logind[1791]: Removed session 14. 
Jan 28 01:27:57.472114 containerd[1830]: time="2026-01-28T01:27:57.471959291Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:27:57.474199 containerd[1830]: time="2026-01-28T01:27:57.474158128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:27:57.474384 containerd[1830]: time="2026-01-28T01:27:57.474184248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:27:57.474419 kubelet[3322]: E0128 01:27:57.474350 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:57.474419 kubelet[3322]: E0128 01:27:57.474395 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:27:57.474536 kubelet[3322]: E0128 01:27:57.474493 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
,RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:27:57.476372 kubelet[3322]: E0128 01:27:57.476296 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:28:00.899448 kubelet[3322]: E0128 01:28:00.898991 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:28:00.899448 kubelet[3322]: E0128 01:28:00.899083 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:28:02.363667 systemd[1]: Started sshd@12-10.200.20.23:22-10.200.16.10:43658.service - OpenSSH per-connection server daemon (10.200.16.10:43658). Jan 28 01:28:02.849969 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 43658 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:02.851417 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:02.856355 systemd-logind[1791]: New session 15 of user core. Jan 28 01:28:02.862732 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:28:03.278252 sshd[5918]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:03.284429 systemd[1]: sshd@12-10.200.20.23:22-10.200.16.10:43658.service: Deactivated successfully. Jan 28 01:28:03.287777 systemd-logind[1791]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:28:03.289094 systemd[1]: session-15.scope: Deactivated successfully. 
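The whisker-backend failure above shows the canonical shape of every pull error in this section: containerd's resolver gets an HTTP 404 from ghcr.io ("trying next host - response was http.StatusNotFound"), wraps it as gRPC NotFound, and kubelet then reports the same error three times on the way up (log.go, kuberuntime_image.go, kuberuntime_manager.go) before pod_workers abandons the sync. A quick way to confirm the tag really is missing from the registry, independent of this node, is to query the manifest endpoint directly. A minimal sketch in Go, assuming ghcr.io's standard anonymous-token flow for public images; the repository and tag are taken from the failing reference above, and the token-endpoint parameters are the usual registry v2 pattern rather than anything confirmed by this log:

```go
// Check whether a tag exists on ghcr.io via the Docker Registry HTTP API v2.
// The anonymous token flow shown is the standard v2 pattern; treating it as
// valid for this repository is an assumption.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker-backend", "v3.30.4"

	// 1. Fetch an anonymous pull token for the repository.
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; a 404 here is the http.StatusNotFound in the log.
	req, _ := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(res.Status) // expect "404 Not Found" for a missing tag
}
```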
Jan 28 01:28:03.291951 systemd-logind[1791]: Removed session 15. Jan 28 01:28:03.899391 kubelet[3322]: E0128 01:28:03.898908 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:28:03.900264 containerd[1830]: time="2026-01-28T01:28:03.898998629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:28:04.169139 containerd[1830]: time="2026-01-28T01:28:04.168849041Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:04.171573 containerd[1830]: time="2026-01-28T01:28:04.171529797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:28:04.171649 containerd[1830]: time="2026-01-28T01:28:04.171626717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:28:04.171826 kubelet[3322]: E0128 01:28:04.171782 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:28:04.171886 kubelet[3322]: E0128 01:28:04.171837 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:28:04.172909 kubelet[3322]: E0128 01:28:04.171954 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mcz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-wgrwx_calico-apiserver(1e049504-bcc6-4539-aec9-ba0a3a0b4d66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:04.173106 kubelet[3322]: E0128 01:28:04.173083 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:28:07.901351 containerd[1830]: time="2026-01-28T01:28:07.901311309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:28:08.192315 containerd[1830]: time="2026-01-28T01:28:08.192165050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:08.194611 containerd[1830]: time="2026-01-28T01:28:08.194574367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:28:08.194689 containerd[1830]: time="2026-01-28T01:28:08.194671406Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:28:08.194852 kubelet[3322]: E0128 01:28:08.194802 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:28:08.195669 kubelet[3322]: E0128 01:28:08.194862 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:28:08.195669 kubelet[3322]: E0128 01:28:08.194978 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:08.197623 containerd[1830]: time="2026-01-28T01:28:08.197595442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:28:08.356521 systemd[1]: Started sshd@13-10.200.20.23:22-10.200.16.10:43670.service - OpenSSH per-connection server daemon (10.200.16.10:43670). 
Jan 28 01:28:08.486398 containerd[1830]: time="2026-01-28T01:28:08.486280467Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:08.488934 containerd[1830]: time="2026-01-28T01:28:08.488862863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:28:08.488934 containerd[1830]: time="2026-01-28T01:28:08.488907823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:28:08.489120 kubelet[3322]: E0128 01:28:08.489063 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:28:08.489197 kubelet[3322]: E0128 01:28:08.489129 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:28:08.489300 kubelet[3322]: E0128 01:28:08.489254 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:08.490566 kubelet[3322]: E0128 01:28:08.490492 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:28:08.799077 sshd[5935]: Accepted publickey for core from 10.200.16.10 port 43670 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:08.800412 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:08.804337 systemd-logind[1791]: New session 16 of user core. Jan 28 01:28:08.809399 systemd[1]: Started session-16.scope - Session 16 of User core. 
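Note how each image alternates between ErrImagePull (an attempt just failed) and ImagePullBackOff (kubelet is waiting before the next attempt). Every image runs its own backoff timer; the Kubernetes documentation describes image pull backoff as starting around 10 s, doubling per failure, and capping at 5 min, which is broadly consistent with the lengthening gaps between attempts in this log. A sketch of that schedule, with the defaults assumed rather than read from this node's kubelet configuration:

```go
// Illustrative kubelet-style image pull backoff. The 10s initial delay,
// 2x factor, and 300s cap are the commonly documented defaults, assumed
// here for illustration only.
package main

import (
	"fmt"
	"time"
)

func pullBackoff(failures int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("after failure %d wait %v\n", n+1, pullBackoff(n))
	}
	// after failure 1 wait 10s, 2 wait 20s, 3 wait 40s, ... capped at 5m0s
}
```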
Jan 28 01:28:09.257308 sshd[5935]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:09.261070 systemd[1]: sshd@13-10.200.20.23:22-10.200.16.10:43670.service: Deactivated successfully. Jan 28 01:28:09.264303 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:28:09.264476 systemd-logind[1791]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:28:09.266794 systemd-logind[1791]: Removed session 16. Jan 28 01:28:11.898249 containerd[1830]: time="2026-01-28T01:28:11.897707048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:28:12.156074 containerd[1830]: time="2026-01-28T01:28:12.155842474Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:12.158574 containerd[1830]: time="2026-01-28T01:28:12.158479551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:28:12.158574 containerd[1830]: time="2026-01-28T01:28:12.158548991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:28:12.158698 kubelet[3322]: E0128 01:28:12.158670 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:28:12.158997 kubelet[3322]: E0128 01:28:12.158714 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:28:12.158997 kubelet[3322]: E0128 01:28:12.158825 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nlqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bsx7f_calico-system(765006a4-4116-4fc5-bb5f-ca94978ecdd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:12.160233 kubelet[3322]: E0128 01:28:12.160191 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:28:12.900775 containerd[1830]: 
time="2026-01-28T01:28:12.900468116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:28:12.901841 kubelet[3322]: E0128 01:28:12.901595 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:28:13.183846 containerd[1830]: time="2026-01-28T01:28:13.183672426Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:13.186545 containerd[1830]: time="2026-01-28T01:28:13.186447942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:28:13.186545 containerd[1830]: time="2026-01-28T01:28:13.186514742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:28:13.186747 kubelet[3322]: E0128 01:28:13.186642 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:28:13.186747 kubelet[3322]: E0128 01:28:13.186687 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:28:13.188035 kubelet[3322]: E0128 01:28:13.186805 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx9wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-km68f_calico-apiserver(059763c9-3c80-4151-8ab7-5e7bceb1fb9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:13.188604 kubelet[3322]: E0128 01:28:13.188227 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:28:14.305478 systemd[1]: Started sshd@14-10.200.20.23:22-10.200.16.10:35348.service - OpenSSH per-connection server daemon (10.200.16.10:35348). Jan 28 01:28:14.790568 sshd[5951]: Accepted publickey for core from 10.200.16.10 port 35348 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:14.791971 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:14.796127 systemd-logind[1791]: New session 17 of user core. Jan 28 01:28:14.805382 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:28:15.203043 sshd[5951]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:15.213417 systemd-logind[1791]: Session 17 logged out. Waiting for processes to exit. 
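The "failed to pull and unpack image" phrasing in these errors comes from containerd's CRI layer, so the failure can be reproduced directly against the node's containerd socket, taking kubelet out of the loop entirely. A minimal sketch using containerd's Go client; the socket path and the "k8s.io" namespace are containerd's conventional defaults on a node like this one (assumed, not confirmed by this log):

```go
// Reproduce the pull directly against containerd, bypassing kubelet.
// Socket path and namespace are the conventional defaults (assumption).
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		// Expect the same "not found" resolution error seen in the log.
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled:", img.Name())
}
```

If this fails with the same "failed to resolve reference ... not found", the problem is the registry tag itself, not kubelet or the pod spec.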
Jan 28 01:28:15.214188 systemd[1]: sshd@14-10.200.20.23:22-10.200.16.10:35348.service: Deactivated successfully. Jan 28 01:28:15.215786 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:28:15.220257 systemd-logind[1791]: Removed session 17. Jan 28 01:28:15.293097 systemd[1]: Started sshd@15-10.200.20.23:22-10.200.16.10:35360.service - OpenSSH per-connection server daemon (10.200.16.10:35360). Jan 28 01:28:15.802395 sshd[5986]: Accepted publickey for core from 10.200.16.10 port 35360 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:15.804157 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:15.808733 systemd-logind[1791]: New session 18 of user core. Jan 28 01:28:15.814416 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:28:15.898196 kubelet[3322]: E0128 01:28:15.897966 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:28:15.899898 containerd[1830]: time="2026-01-28T01:28:15.899658772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:28:16.870531 containerd[1830]: time="2026-01-28T01:28:16.870480045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:16.997037 containerd[1830]: time="2026-01-28T01:28:16.996974143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:28:16.998180 kubelet[3322]: E0128 01:28:16.997660 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:28:16.998180 kubelet[3322]: E0128 01:28:16.997709 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:28:16.998180 kubelet[3322]: E0128 01:28:16.997829 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbfgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79785b9fc9-s4g8w_calico-system(c1a44d94-a633-4913-abd2-c73f93d95c86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:17.000327 containerd[1830]: time="2026-01-28T01:28:16.997198462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:28:17.000373 kubelet[3322]: E0128 01:28:16.999938 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:28:17.096086 sshd[5986]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:17.099190 systemd-logind[1791]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:28:17.101952 systemd[1]: sshd@15-10.200.20.23:22-10.200.16.10:35360.service: Deactivated successfully. Jan 28 01:28:17.105859 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:28:17.108676 systemd-logind[1791]: Removed session 18. Jan 28 01:28:17.190416 systemd[1]: Started sshd@16-10.200.20.23:22-10.200.16.10:35376.service - OpenSSH per-connection server daemon (10.200.16.10:35376). Jan 28 01:28:17.700175 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 35376 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:17.699780 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:17.708744 systemd-logind[1791]: New session 19 of user core. Jan 28 01:28:17.715053 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:28:18.804980 sshd[5998]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:18.814056 systemd[1]: sshd@16-10.200.20.23:22-10.200.16.10:35376.service: Deactivated successfully. Jan 28 01:28:18.815216 systemd-logind[1791]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:28:18.822795 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:28:18.827977 systemd-logind[1791]: Removed session 19. Jan 28 01:28:18.905713 systemd[1]: Started sshd@17-10.200.20.23:22-10.200.16.10:35386.service - OpenSSH per-connection server daemon (10.200.16.10:35386). Jan 28 01:28:19.400000 sshd[6027]: Accepted publickey for core from 10.200.16.10 port 35386 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:19.402713 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:19.412754 systemd-logind[1791]: New session 20 of user core. Jan 28 01:28:19.416424 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:28:19.936820 sshd[6027]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:19.940528 systemd-logind[1791]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:28:19.941413 systemd[1]: sshd@17-10.200.20.23:22-10.200.16.10:35386.service: Deactivated successfully. Jan 28 01:28:19.946785 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:28:19.953337 systemd-logind[1791]: Removed session 20. Jan 28 01:28:20.024449 systemd[1]: Started sshd@18-10.200.20.23:22-10.200.16.10:42314.service - OpenSSH per-connection server daemon (10.200.16.10:42314). Jan 28 01:28:20.524333 sshd[6039]: Accepted publickey for core from 10.200.16.10 port 42314 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:20.526005 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:20.531734 systemd-logind[1791]: New session 21 of user core. Jan 28 01:28:20.537092 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:28:20.992360 sshd[6039]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:20.998103 systemd[1]: sshd@18-10.200.20.23:22-10.200.16.10:42314.service: Deactivated successfully. Jan 28 01:28:21.002244 systemd-logind[1791]: Session 21 logged out. Waiting for processes to exit. 
Jan 28 01:28:21.002324 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:28:21.007341 systemd-logind[1791]: Removed session 21. Jan 28 01:28:23.898917 kubelet[3322]: E0128 01:28:23.898781 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:28:23.899971 kubelet[3322]: E0128 01:28:23.899936 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:28:25.899419 kubelet[3322]: E0128 01:28:25.899309 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:28:25.899419 kubelet[3322]: E0128 01:28:25.899380 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:28:26.086540 systemd[1]: 
Started sshd@19-10.200.20.23:22-10.200.16.10:42326.service - OpenSSH per-connection server daemon (10.200.16.10:42326). Jan 28 01:28:26.586244 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 42326 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:26.588578 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:26.597193 systemd-logind[1791]: New session 22 of user core. Jan 28 01:28:26.619898 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:28:27.075979 sshd[6056]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:27.080757 systemd[1]: sshd@19-10.200.20.23:22-10.200.16.10:42326.service: Deactivated successfully. Jan 28 01:28:27.085380 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:28:27.086013 systemd-logind[1791]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:28:27.087442 systemd-logind[1791]: Removed session 22. Jan 28 01:28:27.896747 kubelet[3322]: E0128 01:28:27.896676 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:28:27.897180 kubelet[3322]: E0128 01:28:27.896764 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:28:32.157471 systemd[1]: Started sshd@20-10.200.20.23:22-10.200.16.10:33322.service - OpenSSH per-connection server daemon (10.200.16.10:33322). Jan 28 01:28:32.605395 sshd[6072]: Accepted publickey for core from 10.200.16.10 port 33322 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:32.607205 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:32.611100 systemd-logind[1791]: New session 23 of user core. Jan 28 01:28:32.618482 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:28:33.088620 sshd[6072]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:33.093664 systemd[1]: sshd@20-10.200.20.23:22-10.200.16.10:33322.service: Deactivated successfully. Jan 28 01:28:33.099964 systemd-logind[1791]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:28:33.102324 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:28:33.103561 systemd-logind[1791]: Removed session 23. 
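Interleaved with the pull failures, sessions 15 through 23 each leave the same clean trail: Accepted publickey, pam_unix session opened, New session N, session closed, scope deactivated, Removed session N. When auditing a run like this, pairing the logind open/close lines gives per-session durations. An illustrative parser; the two sample lines are copied from this log, and the timestamp layout matches its "Jan 28 01:28:02.856355" form:

```go
// Pair systemd-logind "New session N" / "Removed session N" lines and
// print how long each SSH session lasted. The year is absent from the
// timestamps, so only durations within the log are meaningful.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var re = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*?(New|Removed) session (\d+)`)

func main() {
	lines := []string{
		"Jan 28 01:28:02.856355 systemd-logind[1791]: New session 15 of user core.",
		"Jan 28 01:28:03.291951 systemd-logind[1791]: Removed session 15.",
	}
	opened := map[string]time.Time{}
	for _, l := range lines {
		m := re.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "New":
			opened[m[3]] = ts
		case "Removed":
			if start, ok := opened[m[3]]; ok {
				fmt.Printf("session %s lasted %v\n", m[3], ts.Sub(start))
			}
		}
	}
}
```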
Jan 28 01:28:36.008175 containerd[1830]: time="2026-01-28T01:28:36.008129519Z" level=info msg="StopPodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\"" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.074 [WARNING][6097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"765006a4-4116-4fc5-bb5f-ca94978ecdd0", ResourceVersion:"1483", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a", Pod:"goldmane-666569f655-bsx7f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d2a3013aad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.074 [INFO][6097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.074 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" iface="eth0" netns="" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.074 [INFO][6097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.074 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.127 [INFO][6104] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.127 [INFO][6104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.127 [INFO][6104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.144 [WARNING][6104] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.144 [INFO][6104] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.146 [INFO][6104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:28:36.156313 containerd[1830]: 2026-01-28 01:28:36.151 [INFO][6097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.157109 containerd[1830]: time="2026-01-28T01:28:36.156337626Z" level=info msg="TearDown network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" successfully" Jan 28 01:28:36.157109 containerd[1830]: time="2026-01-28T01:28:36.156369226Z" level=info msg="StopPodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" returns successfully" Jan 28 01:28:36.157109 containerd[1830]: time="2026-01-28T01:28:36.156974265Z" level=info msg="RemovePodSandbox for \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\"" Jan 28 01:28:36.157109 containerd[1830]: time="2026-01-28T01:28:36.157004065Z" level=info msg="Forcibly stopping sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\"" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.192 [WARNING][6118] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"765006a4-4116-4fc5-bb5f-ca94978ecdd0", ResourceVersion:"1483", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"70ac833767e2ed21776d8b247aa8c59545d77bdec65321b6579be5d427195d8a", Pod:"goldmane-666569f655-bsx7f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d2a3013aad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.192 [INFO][6118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.192 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" iface="eth0" netns="" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.192 [INFO][6118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.192 [INFO][6118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.222 [INFO][6125] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.222 [INFO][6125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.222 [INFO][6125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.233 [WARNING][6125] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.233 [INFO][6125] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" HandleID="k8s-pod-network.e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Workload="ci--4081.3.6--n--11aaf12d54-k8s-goldmane--666569f655--bsx7f-eth0" Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.234 [INFO][6125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:28:36.241070 containerd[1830]: 2026-01-28 01:28:36.239 [INFO][6118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e" Jan 28 01:28:36.241070 containerd[1830]: time="2026-01-28T01:28:36.241011024Z" level=info msg="TearDown network for sandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" successfully" Jan 28 01:28:36.253940 containerd[1830]: time="2026-01-28T01:28:36.253794766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:28:36.253940 containerd[1830]: time="2026-01-28T01:28:36.253855126Z" level=info msg="RemovePodSandbox \"e4f257ea6b6fbfe061dd175d8949eefe18f847162f06bd651014c366dbbfb56e\" returns successfully" Jan 28 01:28:36.255589 containerd[1830]: time="2026-01-28T01:28:36.254696485Z" level=info msg="StopPodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\"" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.295 [WARNING][6139] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"059763c9-3c80-4151-8ab7-5e7bceb1fb9d", ResourceVersion:"1471", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a", Pod:"calico-apiserver-c96dcf9cd-km68f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f83ed49f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.296 [INFO][6139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.296 [INFO][6139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" iface="eth0" netns="" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.296 [INFO][6139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.296 [INFO][6139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.315 [INFO][6147] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.315 [INFO][6147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.316 [INFO][6147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.324 [WARNING][6147] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.325 [INFO][6147] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.326 [INFO][6147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:28:36.330894 containerd[1830]: 2026-01-28 01:28:36.328 [INFO][6139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.330894 containerd[1830]: time="2026-01-28T01:28:36.330781775Z" level=info msg="TearDown network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" successfully" Jan 28 01:28:36.330894 containerd[1830]: time="2026-01-28T01:28:36.330807975Z" level=info msg="StopPodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" returns successfully" Jan 28 01:28:36.333876 containerd[1830]: time="2026-01-28T01:28:36.333598811Z" level=info msg="RemovePodSandbox for \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\"" Jan 28 01:28:36.333876 containerd[1830]: time="2026-01-28T01:28:36.333629531Z" level=info msg="Forcibly stopping sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\"" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.376 [WARNING][6162] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0", GenerateName:"calico-apiserver-c96dcf9cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"059763c9-3c80-4151-8ab7-5e7bceb1fb9d", ResourceVersion:"1471", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c96dcf9cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"42c8d0e43bfde47ba2940fc0d02b9326b5e4eeb8f1a1d3deab46a287f59e2e1a", Pod:"calico-apiserver-c96dcf9cd-km68f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f83ed49f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.377 [INFO][6162] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.377 [INFO][6162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" iface="eth0" netns="" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.377 [INFO][6162] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.377 [INFO][6162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.399 [INFO][6170] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.399 [INFO][6170] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.399 [INFO][6170] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.408 [WARNING][6170] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.408 [INFO][6170] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" HandleID="k8s-pod-network.12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--apiserver--c96dcf9cd--km68f-eth0" Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.409 [INFO][6170] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:28:36.419179 containerd[1830]: 2026-01-28 01:28:36.414 [INFO][6162] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22" Jan 28 01:28:36.419179 containerd[1830]: time="2026-01-28T01:28:36.418846849Z" level=info msg="TearDown network for sandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" successfully" Jan 28 01:28:36.425874 containerd[1830]: time="2026-01-28T01:28:36.425805319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:28:36.426073 containerd[1830]: time="2026-01-28T01:28:36.425979558Z" level=info msg="RemovePodSandbox \"12985eae601227cf1f39f71f76b71c4c0742df00b85398a51702751e6062be22\" returns successfully" Jan 28 01:28:36.426860 containerd[1830]: time="2026-01-28T01:28:36.426591158Z" level=info msg="StopPodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\"" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.462 [WARNING][6184] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0", GenerateName:"calico-kube-controllers-79785b9fc9-", Namespace:"calico-system", SelfLink:"", UID:"c1a44d94-a633-4913-abd2-c73f93d95c86", ResourceVersion:"1494", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79785b9fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678", Pod:"calico-kube-controllers-79785b9fc9-s4g8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califadd15e77a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.462 [INFO][6184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.462 [INFO][6184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" iface="eth0" netns="" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.462 [INFO][6184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.462 [INFO][6184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.491 [INFO][6192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.491 [INFO][6192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.492 [INFO][6192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.504 [WARNING][6192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.504 [INFO][6192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.509 [INFO][6192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:28:36.518251 containerd[1830]: 2026-01-28 01:28:36.511 [INFO][6184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.518941 containerd[1830]: time="2026-01-28T01:28:36.518729905Z" level=info msg="TearDown network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" successfully" Jan 28 01:28:36.518941 containerd[1830]: time="2026-01-28T01:28:36.518771105Z" level=info msg="StopPodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" returns successfully" Jan 28 01:28:36.520277 containerd[1830]: time="2026-01-28T01:28:36.519312064Z" level=info msg="RemovePodSandbox for \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\"" Jan 28 01:28:36.520277 containerd[1830]: time="2026-01-28T01:28:36.519343504Z" level=info msg="Forcibly stopping sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\"" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.587 [WARNING][6206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0", GenerateName:"calico-kube-controllers-79785b9fc9-", Namespace:"calico-system", SelfLink:"", UID:"c1a44d94-a633-4913-abd2-c73f93d95c86", ResourceVersion:"1494", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79785b9fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-11aaf12d54", ContainerID:"aadf49fd5dd587c67aa99d9c38943eb5120194a8c6ea5ba29fd68d3111fdb678", Pod:"calico-kube-controllers-79785b9fc9-s4g8w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califadd15e77a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.589 [INFO][6206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.589 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" iface="eth0" netns="" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.589 [INFO][6206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.589 [INFO][6206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.639 [INFO][6213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.641 [INFO][6213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.641 [INFO][6213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.656 [WARNING][6213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.656 [INFO][6213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" HandleID="k8s-pod-network.f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Workload="ci--4081.3.6--n--11aaf12d54-k8s-calico--kube--controllers--79785b9fc9--s4g8w-eth0" Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.658 [INFO][6213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:28:36.669548 containerd[1830]: 2026-01-28 01:28:36.667 [INFO][6206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f" Jan 28 01:28:36.671602 containerd[1830]: time="2026-01-28T01:28:36.671218646Z" level=info msg="TearDown network for sandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" successfully" Jan 28 01:28:36.680047 containerd[1830]: time="2026-01-28T01:28:36.679885153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:28:36.680047 containerd[1830]: time="2026-01-28T01:28:36.679955353Z" level=info msg="RemovePodSandbox \"f65e4128ebefa1c7741f82f18ae27b81300ae6ce72c9fbe7ecf9c296a3b2785f\" returns successfully" Jan 28 01:28:36.899767 kubelet[3322]: E0128 01:28:36.898122 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:28:38.171410 systemd[1]: Started sshd@21-10.200.20.23:22-10.200.16.10:33326.service - OpenSSH per-connection server daemon (10.200.16.10:33326). Jan 28 01:28:38.661635 sshd[6225]: Accepted publickey for core from 10.200.16.10 port 33326 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:38.662517 sshd[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:38.666581 systemd-logind[1791]: New session 24 of user core. Jan 28 01:28:38.673525 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 28 01:28:38.897890 kubelet[3322]: E0128 01:28:38.897851 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:28:38.901162 kubelet[3322]: E0128 01:28:38.898941 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:28:38.901162 kubelet[3322]: E0128 01:28:38.899763 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:28:39.086338 sshd[6225]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:39.090357 systemd-logind[1791]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:28:39.091022 systemd[1]: sshd@21-10.200.20.23:22-10.200.16.10:33326.service: Deactivated successfully. Jan 28 01:28:39.093419 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:28:39.095611 systemd-logind[1791]: Removed session 24. 
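The pod_workers entries above repeat for every pod whose image pull is backing off, with multi-container pods (csi-node-driver-7cvf4) reporting a bracketed list of failures. When triaging a node in this state, a throwaway parser over the journal text can summarize which pods are stuck on which images. The regexes below are tuned to the escaped quoting seen in these exact entries, not to journald output in general.

# pull_backoff_triage.py - collect pod -> images stuck in ImagePullBackOff
# from journal excerpts like the kubelet entries above.
import re
import sys
from collections import defaultdict

PULL_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
POD_RE = re.compile(r'pod="([^"]+)"')

def triage(lines):
    stuck = defaultdict(set)
    for line in lines:
        pod = POD_RE.search(line)
        if not pod:
            continue
        for image in PULL_RE.findall(line):
            stuck[pod.group(1)].add(image)
    return stuck

if __name__ == "__main__":
    for pod, images in sorted(triage(sys.stdin).items()):
        print(pod, "->", ", ".join(sorted(images)))

Usage would be along the lines of `journalctl -u kubelet | python3 pull_backoff_triage.py` (unit name assumed for this node).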
Jan 28 01:28:39.897396 kubelet[3322]: E0128 01:28:39.897342 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:28:40.900920 containerd[1830]: time="2026-01-28T01:28:40.900876883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:28:41.162666 containerd[1830]: time="2026-01-28T01:28:41.162523627Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:41.165291 containerd[1830]: time="2026-01-28T01:28:41.165231263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:28:41.165430 containerd[1830]: time="2026-01-28T01:28:41.165342343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:28:41.166075 kubelet[3322]: E0128 01:28:41.165521 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:28:41.166075 kubelet[3322]: E0128 01:28:41.165573 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:28:41.166075 kubelet[3322]: E0128 01:28:41.165698 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a9c0213a57f46a6a663e5576055455b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:41.170156 containerd[1830]: time="2026-01-28T01:28:41.167871859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:28:41.456135 containerd[1830]: time="2026-01-28T01:28:41.456008005Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:41.458954 containerd[1830]: time="2026-01-28T01:28:41.458873441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:28:41.459079 containerd[1830]: time="2026-01-28T01:28:41.458985640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:28:41.459164 kubelet[3322]: E0128 01:28:41.459115 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:28:41.459228 kubelet[3322]: E0128 01:28:41.459181 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:28:41.459318 kubelet[3322]: E0128 01:28:41.459280 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54666d9c5f-8kw6q_calico-system(1c968694-35fa-404a-bc2c-7a251b92bedd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:41.460743 kubelet[3322]: E0128 01:28:41.460701 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:28:44.172426 systemd[1]: Started sshd@22-10.200.20.23:22-10.200.16.10:45364.service - OpenSSH per-connection server daemon (10.200.16.10:45364). 
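Every failed pull above starts with containerd's resolver reporting http.StatusNotFound from ghcr.io, i.e. the v3.30.4 tags do not exist in those repositories. The same 404 can be reproduced off-node. The sketch below assumes ghcr.io's standard anonymous OCI distribution token flow for public repositories; endpoint paths follow the distribution spec.

# check_tag.py - probe an OCI registry for a tag, mirroring the NotFound
# responses containerd logs above.
import json
import urllib.request
import urllib.error

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/whisker"   # repository from the failing pull
TAG = "v3.30.4"                   # tag the kubelet could not resolve

def get_token(repo: str) -> str:
    url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]

def tag_exists(repo: str, tag: str) -> bool:
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {get_token(repo)}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:  # the status containerd reports as "trying next host"
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} exists:", tag_exists(REPO, TAG))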
Jan 28 01:28:44.663048 sshd[6238]: Accepted publickey for core from 10.200.16.10 port 45364 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:44.664584 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:44.674237 systemd-logind[1791]: New session 25 of user core. Jan 28 01:28:44.682430 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:28:45.076897 sshd[6238]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:45.079556 systemd[1]: sshd@22-10.200.20.23:22-10.200.16.10:45364.service: Deactivated successfully. Jan 28 01:28:45.083276 systemd-logind[1791]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:28:45.083982 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:28:45.087134 systemd-logind[1791]: Removed session 25. Jan 28 01:28:49.898124 kubelet[3322]: E0128 01:28:49.897276 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:28:49.898124 kubelet[3322]: E0128 01:28:49.897379 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d" Jan 28 01:28:50.178379 systemd[1]: Started sshd@23-10.200.20.23:22-10.200.16.10:44750.service - OpenSSH per-connection server daemon (10.200.16.10:44750). Jan 28 01:28:50.674232 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 44750 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:50.674752 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:50.680139 systemd-logind[1791]: New session 26 of user core. Jan 28 01:28:50.684391 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:28:51.107321 sshd[6278]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:51.110815 systemd-logind[1791]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:28:51.112733 systemd[1]: sshd@23-10.200.20.23:22-10.200.16.10:44750.service: Deactivated successfully. Jan 28 01:28:51.117002 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:28:51.118387 systemd-logind[1791]: Removed session 26. 
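The "Back-off pulling image" errors recur on a schedule because the kubelet retries each failing pull with exponential back-off, which is why fresh PullImage attempts (like the apiserver pull below) keep appearing minutes apart. The 10-second initial delay and 300-second cap in this sketch are kubelet's documented defaults, assumed here rather than read from this node's configuration.

# backoff.py - sketch of a kubelet-style image pull back-off schedule.
INITIAL, CAP = 10, 300  # seconds; assumed defaults, not node config

def delays(n: int):
    d = INITIAL
    for _ in range(n):
        yield d
        d = min(d * 2, CAP)

print(list(delays(8)))  # [10, 20, 40, 80, 160, 300, 300, 300]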
Jan 28 01:28:51.898498 containerd[1830]: time="2026-01-28T01:28:51.898447432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:28:52.158800 containerd[1830]: time="2026-01-28T01:28:52.158564339Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:52.161294 containerd[1830]: time="2026-01-28T01:28:52.161206615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:28:52.162851 containerd[1830]: time="2026-01-28T01:28:52.161410135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:28:52.162934 kubelet[3322]: E0128 01:28:52.161562 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:28:52.162934 kubelet[3322]: E0128 01:28:52.161607 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:28:52.162934 kubelet[3322]: E0128 01:28:52.161718 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8mcz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-wgrwx_calico-apiserver(1e049504-bcc6-4539-aec9-ba0a3a0b4d66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:52.163513 kubelet[3322]: E0128 01:28:52.163453 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-wgrwx" podUID="1e049504-bcc6-4539-aec9-ba0a3a0b4d66" Jan 28 01:28:52.898740 containerd[1830]: time="2026-01-28T01:28:52.898438677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:28:53.151583 containerd[1830]: time="2026-01-28T01:28:53.150828714Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:53.153886 containerd[1830]: time="2026-01-28T01:28:53.153651990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:28:53.153886 containerd[1830]: time="2026-01-28T01:28:53.153751550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:28:53.154182 kubelet[3322]: E0128 01:28:53.154127 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:28:53.154236 kubelet[3322]: E0128 01:28:53.154194 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:28:53.155040 kubelet[3322]: E0128 01:28:53.154975 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:53.155629 containerd[1830]: time="2026-01-28T01:28:53.155399348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:28:53.424648 containerd[1830]: time="2026-01-28T01:28:53.424388881Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:53.427052 containerd[1830]: time="2026-01-28T01:28:53.426954238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:28:53.427281 containerd[1830]: time="2026-01-28T01:28:53.427192357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:28:53.427362 kubelet[3322]: E0128 01:28:53.427310 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:28:53.427664 kubelet[3322]: E0128 01:28:53.427368 3322 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:28:53.427664 kubelet[3322]: E0128 01:28:53.427603 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nlqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bsx7f_calico-system(765006a4-4116-4fc5-bb5f-ca94978ecdd0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:53.428115 containerd[1830]: time="2026-01-28T01:28:53.428056076Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:28:53.429731 kubelet[3322]: E0128 01:28:53.429482 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bsx7f" podUID="765006a4-4116-4fc5-bb5f-ca94978ecdd0" Jan 28 01:28:53.719670 containerd[1830]: time="2026-01-28T01:28:53.719523778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:28:53.724159 containerd[1830]: time="2026-01-28T01:28:53.723342812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:28:53.724159 containerd[1830]: time="2026-01-28T01:28:53.723457092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:28:53.724455 kubelet[3322]: E0128 01:28:53.724407 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:28:53.724519 kubelet[3322]: E0128 01:28:53.724464 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:28:53.726217 kubelet[3322]: E0128 01:28:53.724578 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t26q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cvf4_calico-system(0a3f2b82-9dfb-45f4-8480-07421e1f39e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:28:53.726217 kubelet[3322]: E0128 01:28:53.726171 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cvf4" podUID="0a3f2b82-9dfb-45f4-8480-07421e1f39e6" Jan 28 01:28:54.900668 kubelet[3322]: E0128 01:28:54.900619 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54666d9c5f-8kw6q" podUID="1c968694-35fa-404a-bc2c-7a251b92bedd" Jan 28 01:28:56.186370 systemd[1]: Started sshd@24-10.200.20.23:22-10.200.16.10:44760.service - OpenSSH per-connection server daemon (10.200.16.10:44760). Jan 28 01:28:56.631174 sshd[6309]: Accepted publickey for core from 10.200.16.10 port 44760 ssh2: RSA SHA256:mt/Wq3KIKwIer9YIq1LQuVz4zsibJKOQxGgoJKvjdGI Jan 28 01:28:56.634123 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:56.638749 systemd-logind[1791]: New session 27 of user core. Jan 28 01:28:56.642427 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 01:28:57.068085 sshd[6309]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:57.077011 systemd[1]: sshd@24-10.200.20.23:22-10.200.16.10:44760.service: Deactivated successfully. Jan 28 01:28:57.082302 systemd-logind[1791]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:28:57.083369 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:28:57.084647 systemd-logind[1791]: Removed session 27. Jan 28 01:29:00.900936 containerd[1830]: time="2026-01-28T01:29:00.899291912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:29:01.240849 containerd[1830]: time="2026-01-28T01:29:01.240599690Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:29:01.243675 containerd[1830]: time="2026-01-28T01:29:01.243571166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:29:01.243675 containerd[1830]: time="2026-01-28T01:29:01.243611686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:29:01.243848 kubelet[3322]: E0128 01:29:01.243773 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:29:01.243848 kubelet[3322]: E0128 01:29:01.243819 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:29:01.245223 kubelet[3322]: E0128 01:29:01.243956 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbfgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79785b9fc9-s4g8w_calico-system(c1a44d94-a633-4913-abd2-c73f93d95c86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:01.245223 kubelet[3322]: E0128 01:29:01.245107 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79785b9fc9-s4g8w" podUID="c1a44d94-a633-4913-abd2-c73f93d95c86" Jan 28 01:29:02.901039 containerd[1830]: time="2026-01-28T01:29:02.900991440Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:29:03.176367 containerd[1830]: time="2026-01-28T01:29:03.176218107Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:29:03.178844 containerd[1830]: time="2026-01-28T01:29:03.178787623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:29:03.178960 containerd[1830]: time="2026-01-28T01:29:03.178893543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:29:03.179404 kubelet[3322]: E0128 01:29:03.179125 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:29:03.179404 kubelet[3322]: E0128 01:29:03.179232 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:29:03.179927 kubelet[3322]: E0128 01:29:03.179379 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nx9wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c96dcf9cd-km68f_calico-apiserver(059763c9-3c80-4151-8ab7-5e7bceb1fb9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:29:03.180646 kubelet[3322]: E0128 01:29:03.180606 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c96dcf9cd-km68f" podUID="059763c9-3c80-4151-8ab7-5e7bceb1fb9d"