Jan 23 23:50:41.196866 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 23:50:41.196888 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:50:41.196896 kernel: KASLR enabled
Jan 23 23:50:41.196902 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 23:50:41.196909 kernel: printk: bootconsole [pl11] enabled
Jan 23 23:50:41.196915 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:50:41.196922 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 23 23:50:41.196928 kernel: random: crng init done
Jan 23 23:50:41.196934 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:50:41.196940 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 23 23:50:41.196946 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.196952 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.196960 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 23:50:41.196966 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.196973 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.196980 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.196986 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.196995 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.197001 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.197008 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 23:50:41.197014 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 23:50:41.197021 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 23:50:41.197027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 23:50:41.197034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 23 23:50:41.197040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 23 23:50:41.197047 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 23 23:50:41.197054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 23 23:50:41.197060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 23 23:50:41.197069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 23 23:50:41.197075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 23 23:50:41.197082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 23 23:50:41.197088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 23 23:50:41.197095 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 23 23:50:41.197101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 23 23:50:41.197108 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 23 23:50:41.197114 kernel: Zone ranges:
Jan 23 23:50:41.197121 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 23:50:41.197127 kernel: DMA32 empty
Jan 23 23:50:41.197133 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:50:41.197140 kernel: Movable zone start for each node
Jan 23 23:50:41.197150 kernel: Early memory node ranges
Jan 23 23:50:41.197157 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 23:50:41.197164 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 23 23:50:41.197171 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 23 23:50:41.197178 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 23 23:50:41.197187 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 23 23:50:41.197193 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 23 23:50:41.197200 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 23:50:41.197207 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 23:50:41.197214 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 23:50:41.197221 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:50:41.197228 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 23:50:41.197234 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:50:41.197241 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 23:50:41.197248 kernel: psci: SMC Calling Convention v1.4
Jan 23 23:50:41.197255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 23:50:41.197261 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 23:50:41.197270 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:50:41.197277 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:50:41.197284 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:50:41.197290 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:50:41.197297 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:50:41.197304 kernel: CPU features: detected: Hardware dirty bit management
Jan 23 23:50:41.197310 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:50:41.197317 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 23:50:41.197324 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 23:50:41.197331 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 23:50:41.197338 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 23 23:50:41.197346 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 23:50:41.197353 kernel: alternatives: applying boot alternatives
Jan 23 23:50:41.197361 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
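
The command line above carries Flatcar's A/B image selection (BOOT_IMAGE=/flatcar/vmlinuz-a), the dm-verity root hash for the /usr partition, and the Azure OEM id. A minimal Python sketch (not Flatcar tooling; the tokenization is an assumption about how such a line can be split) for pulling it apart:

```python
# Sketch: tokenize a kernel command line like the one logged above.
# Repeated keys (console= appears twice) are kept in a list.
from collections import defaultdict

cmdline = (
    'BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr '
    'verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw '
    'mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 '
    'console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected '
    'acpi=force flatcar.oem.id=azure flatcar.autologin '
    'verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09'
)

flags, options = [], defaultdict(list)
for token in cmdline.split():
    key, sep, value = token.partition('=')   # split at the first '=' only
    if sep:
        options[key].append(value)
    else:
        flags.append(token)

print(flags)                      # ['flatcar.autologin']
print(options['console'])         # ['tty1', 'ttyAMA0,115200n8']
print(options['flatcar.oem.id'])  # ['azure']
```
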
Jan 23 23:50:41.197369 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:50:41.197376 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:50:41.197382 kernel: Fallback order for Node 0: 0
Jan 23 23:50:41.197389 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 23 23:50:41.197396 kernel: Policy zone: Normal
Jan 23 23:50:41.197403 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:50:41.197410 kernel: software IO TLB: area num 2.
Jan 23 23:50:41.197416 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 23 23:50:41.197425 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 23 23:50:41.197432 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:50:41.197438 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:50:41.197446 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:50:41.197453 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:50:41.197460 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:50:41.197467 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:50:41.197473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:50:41.197480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:50:41.197487 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:50:41.197494 kernel: GICv3: 960 SPIs implemented
Jan 23 23:50:41.197502 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:50:41.197508 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:50:41.197515 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 23:50:41.197522 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 23:50:41.197528 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 23:50:41.197535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:50:41.197542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:50:41.197549 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 23:50:41.197556 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 23:50:41.197563 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 23:50:41.197570 kernel: Console: colour dummy device 80x25
Jan 23 23:50:41.197578 kernel: printk: console [tty1] enabled
Jan 23 23:50:41.197586 kernel: ACPI: Core revision 20230628
Jan 23 23:50:41.197593 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 23:50:41.197600 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:50:41.197607 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:50:41.197614 kernel: landlock: Up and running.
Jan 23 23:50:41.197621 kernel: SELinux: Initializing.
Jan 23 23:50:41.197628 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:50:41.197635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:50:41.197643 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:50:41.197651 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:50:41.197658 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 23 23:50:41.197665 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 23:50:41.197680 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 23:50:41.197688 kernel: rcu: Hierarchical SRCU implementation.
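
Two of the numbers above can be cross-checked by hand: with the 25.00 MHz architected timer and (assuming) CONFIG_HZ=1000, lpj comes out as 25,000,000 / 1000 = 25000 and the printed BogoMIPS as lpj * HZ / 500000 = 50.00; likewise the "Memory:" line's available figure is total minus reserved. A quick consistency check, with the HZ value being my assumption:

```python
# Sanity-check the calibration and memory accounting lines above.
timer_hz = 25_000_000              # "cp15 timer(s) running at 25.00MHz"
HZ = 1000                          # assumption: kernel tick rate CONFIG_HZ=1000
lpj = timer_hz // HZ
assert lpj == 25000                # matches "(lpj=25000)"
assert lpj * HZ / 500_000 == 50.0  # matches "50.00 BogoMIPS"

total_k, reserved_k = 4_194_160, 211_524
assert total_k - reserved_k == 3_982_636  # matches "Memory: 3982636K/4194160K available"
```
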
Jan 23 23:50:41.197695 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:50:41.197702 kernel: Remapping and enabling EFI services.
Jan 23 23:50:41.197716 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:50:41.197724 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:50:41.197731 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 23:50:41.197738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 23:50:41.197747 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 23:50:41.197755 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:50:41.197762 kernel: SMP: Total of 2 processors activated.
Jan 23 23:50:41.197770 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:50:41.197777 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 23:50:41.197786 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 23:50:41.197793 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:50:41.197801 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 23:50:41.197808 kernel: CPU features: detected: LSE atomic instructions
Jan 23 23:50:41.197816 kernel: CPU features: detected: Privileged Access Never
Jan 23 23:50:41.197823 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:50:41.197830 kernel: alternatives: applying system-wide alternatives
Jan 23 23:50:41.197837 kernel: devtmpfs: initialized
Jan 23 23:50:41.197845 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:50:41.197854 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:50:41.197861 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:50:41.197869 kernel: SMBIOS 3.1.0 present.
Jan 23 23:50:41.197876 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 23 23:50:41.197884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:50:41.197891 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:50:41.197899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:50:41.197906 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:50:41.197914 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:50:41.197922 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 23 23:50:41.197930 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:50:41.197937 kernel: cpuidle: using governor menu
Jan 23 23:50:41.197944 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:50:41.197952 kernel: ASID allocator initialised with 32768 entries
Jan 23 23:50:41.197959 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:50:41.197967 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:50:41.197974 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 23:50:41.197982 kernel: Modules: 0 pages in range for non-PLT usage
Jan 23 23:50:41.197991 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:50:41.197998 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:50:41.198005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:50:41.198013 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:50:41.198020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:50:41.198028 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:50:41.198035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:50:41.198043 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:50:41.198050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:50:41.198059 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:50:41.198066 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:50:41.198073 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:50:41.198081 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:50:41.198088 kernel: ACPI: Interpreter enabled
Jan 23 23:50:41.198096 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:50:41.198103 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 23:50:41.198111 kernel: printk: console [ttyAMA0] enabled
Jan 23 23:50:41.198118 kernel: printk: bootconsole [pl11] disabled
Jan 23 23:50:41.198127 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 23:50:41.198134 kernel: iommu: Default domain type: Translated
Jan 23 23:50:41.198142 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:50:41.198149 kernel: efivars: Registered efivars operations
Jan 23 23:50:41.198156 kernel: vgaarb: loaded
Jan 23 23:50:41.198164 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:50:41.198171 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:50:41.198178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:50:41.198186 kernel: pnp: PnP ACPI init
Jan 23 23:50:41.198195 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 23:50:41.198202 kernel: NET: Registered PF_INET protocol family
Jan 23 23:50:41.198209 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:50:41.198217 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:50:41.198224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:50:41.198232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:50:41.198239 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:50:41.198247 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:50:41.198254 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:50:41.198263 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:50:41.198271 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:50:41.198278 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:50:41.198285 kernel: kvm [1]: HYP mode not available
Jan 23 23:50:41.198293 kernel: Initialise system trusted keyrings
Jan 23 23:50:41.198300 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:50:41.198307 kernel: Key type asymmetric registered
Jan 23 23:50:41.198314 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:50:41.198322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:50:41.198331 kernel: io scheduler mq-deadline registered
Jan 23 23:50:41.198338 kernel: io scheduler kyber registered
Jan 23 23:50:41.198345 kernel: io scheduler bfq registered
Jan 23 23:50:41.198353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:50:41.198360 kernel: thunder_xcv, ver 1.0
Jan 23 23:50:41.198367 kernel: thunder_bgx, ver 1.0
Jan 23 23:50:41.198375 kernel: nicpf, ver 1.0
Jan 23 23:50:41.198382 kernel: nicvf, ver 1.0
Jan 23 23:50:41.198510 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:50:41.198583 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:50:40 UTC (1769212240)
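
The rtc-efi line prints both the wall-clock time and the raw epoch seconds, so the conversion can be reproduced directly:

```python
# Reproduce the rtc-efi conversion logged above.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1769212240, tz=timezone.utc).isoformat())
# -> 2026-01-23T23:50:40+00:00, i.e. "2026-01-23T23:50:40 UTC (1769212240)"
```
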
Jan 23 23:50:41.198593 kernel: efifb: probing for efifb
Jan 23 23:50:41.198601 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 23:50:41.198609 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 23:50:41.198616 kernel: efifb: scrolling: redraw
Jan 23 23:50:41.198623 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 23:50:41.198631 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 23:50:41.198638 kernel: fb0: EFI VGA frame buffer device
Jan 23 23:50:41.198647 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 23:50:41.198654 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:50:41.198662 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 23 23:50:41.198670 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:50:41.198684 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:50:41.198691 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:50:41.198699 kernel: Segment Routing with IPv6
Jan 23 23:50:41.198706 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:50:41.198714 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:50:41.198723 kernel: Key type dns_resolver registered
Jan 23 23:50:41.198730 kernel: registered taskstats version 1
Jan 23 23:50:41.198737 kernel: Loading compiled-in X.509 certificates
Jan 23 23:50:41.198745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:50:41.198752 kernel: Key type .fscrypt registered
Jan 23 23:50:41.198759 kernel: Key type fscrypt-provisioning registered
Jan 23 23:50:41.198766 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:50:41.198774 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:50:41.198781 kernel: ima: No architecture policies found
Jan 23 23:50:41.198790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:50:41.198798 kernel: clk: Disabling unused clocks
Jan 23 23:50:41.198805 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:50:41.198812 kernel: Run /init as init process
Jan 23 23:50:41.198819 kernel: with arguments:
Jan 23 23:50:41.198826 kernel: /init
Jan 23 23:50:41.198834 kernel: with environment:
Jan 23 23:50:41.198841 kernel: HOME=/
Jan 23 23:50:41.198848 kernel: TERM=linux
Jan 23 23:50:41.198857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:50:41.198868 systemd[1]: Detected virtualization microsoft.
Jan 23 23:50:41.198876 systemd[1]: Detected architecture arm64.
Jan 23 23:50:41.198883 systemd[1]: Running in initrd.
Jan 23 23:50:41.198891 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:50:41.198899 systemd[1]: Hostname set to .
Jan 23 23:50:41.198907 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:50:41.198917 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:50:41.198925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:50:41.198933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:50:41.198941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:50:41.198950 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:50:41.198958 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:50:41.198966 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:50:41.198975 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:50:41.198985 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:50:41.198993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:50:41.199001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:50:41.199009 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:50:41.199017 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:50:41.199024 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:50:41.199032 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:50:41.199040 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:50:41.199049 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:50:41.199057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:50:41.199065 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:50:41.199073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
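
The systemd 255 banner a few entries up encodes build-time options as +FEATURE/-FEATURE tokens. A small illustrative sketch for splitting them into enabled and disabled sets (the banner string is copied from the log; the parsing itself is just one obvious way to do it):

```python
# Split systemd's build-flag banner into enabled/disabled feature sets.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
          "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")
enabled = {t[1:] for t in banner.split() if t.startswith('+')}
disabled = {t[1:] for t in banner.split() if t.startswith('-')}
print('TPM2' in enabled, 'FIDO2' in disabled)   # True True
```
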
Jan 23 23:50:41.199081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:50:41.199089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:50:41.199097 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:50:41.199105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:50:41.199114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:50:41.199122 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:50:41.199130 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:50:41.199137 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:50:41.199145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:50:41.199167 systemd-journald[217]: Collecting audit messages is disabled.
Jan 23 23:50:41.199189 systemd-journald[217]: Journal started
Jan 23 23:50:41.199207 systemd-journald[217]: Runtime Journal (/run/log/journal/924e5c1b3f064108bdd3ea72997672ea) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:50:41.211127 systemd-modules-load[218]: Inserted module 'overlay'
Jan 23 23:50:41.219980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:50:41.235691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:50:41.243556 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:50:41.243605 kernel: Bridge firewalling registered
Jan 23 23:50:41.243688 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 23 23:50:41.252067 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:50:41.267704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:50:41.273582 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:50:41.282065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:50:41.291095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:50:41.308916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:50:41.316856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:50:41.328852 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:50:41.349178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:50:41.361491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:50:41.369143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:50:41.376577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:50:41.387022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:50:41.409969 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:50:41.423803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:50:41.437548 dracut-cmdline[252]: dracut-dracut-053
Jan 23 23:50:41.438929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
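
Every entry in this console log follows the same shape: a microsecond timestamp, a source (kernel, systemd[1], systemd-journald[217], ...), and a message. A hedged sketch of a parser for that shape (the regex is an assumption tuned to this particular log, not a general journald format):

```python
import re

# Parse one console-log entry of the form used throughout this boot log.
LINE = re.compile(
    r'^(?P<ts>\w{3} \d{2} [\d:.]+) (?P<src>[^:\[\]]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$'
)

entry = ('Jan 23 23:50:41.199207 systemd-journald[217]: Runtime Journal '
         '(/run/log/journal/924e5c1b3f064108bdd3ea72997672ea) is 8.0M, max 78.5M, 70.5M free.')
m = LINE.match(entry)
print(m['src'], m['pid'])   # systemd-journald 217
```
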
Jan 23 23:50:41.465453 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:50:41.452855 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:50:41.506413 systemd-resolved[257]: Positive Trust Anchors:
Jan 23 23:50:41.506428 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:50:41.506459 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:50:41.512220 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jan 23 23:50:41.513078 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:50:41.525345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:50:41.577684 kernel: SCSI subsystem initialized
Jan 23 23:50:41.584684 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:50:41.594689 kernel: iscsi: registered transport (tcp)
Jan 23 23:50:41.612221 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:50:41.612282 kernel: QLogic iSCSI HBA Driver
Jan 23 23:50:41.645391 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:50:41.655932 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:50:41.690250 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:50:41.690327 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:50:41.695193 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:50:41.743700 kernel: raid6: neonx8 gen() 15780 MB/s
Jan 23 23:50:41.762691 kernel: raid6: neonx4 gen() 15701 MB/s
Jan 23 23:50:41.781693 kernel: raid6: neonx2 gen() 13311 MB/s
Jan 23 23:50:41.801683 kernel: raid6: neonx1 gen() 10533 MB/s
Jan 23 23:50:41.820680 kernel: raid6: int64x8 gen() 6979 MB/s
Jan 23 23:50:41.839680 kernel: raid6: int64x4 gen() 7362 MB/s
Jan 23 23:50:41.859684 kernel: raid6: int64x2 gen() 6145 MB/s
Jan 23 23:50:41.881623 kernel: raid6: int64x1 gen() 5069 MB/s
Jan 23 23:50:41.881642 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Jan 23 23:50:41.904387 kernel: raid6: .... xor() 12038 MB/s, rmw enabled
Jan 23 23:50:41.904399 kernel: raid6: using neon recovery algorithm
Jan 23 23:50:41.915110 kernel: xor: measuring software checksum speed
Jan 23 23:50:41.915136 kernel: 8regs : 19807 MB/sec
Jan 23 23:50:41.918038 kernel: 32regs : 19650 MB/sec
Jan 23 23:50:41.920902 kernel: arm64_neon : 27132 MB/sec
Jan 23 23:50:41.924203 kernel: xor: using function: arm64_neon (27132 MB/sec)
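
The gen() lines above are the kernel benchmarking each available raid6 syndrome routine and then keeping the fastest. The same pick in miniature, with the throughputs copied from the log:

```python
# The kernel's choice reduces to an argmax over the measured speeds above.
gen_mbps = {'neonx8': 15780, 'neonx4': 15701, 'neonx2': 13311, 'neonx1': 10533,
            'int64x8': 6979, 'int64x4': 7362, 'int64x2': 6145, 'int64x1': 5069}
best = max(gen_mbps, key=gen_mbps.get)
print(f'raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s')
# -> raid6: using algorithm neonx8 gen() 15780 MB/s
```
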
Jan 23 23:50:41.974819 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:50:41.985244 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:50:41.998834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:50:42.019102 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 23 23:50:42.023929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:50:42.042806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:50:42.059452 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jan 23 23:50:42.093334 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:50:42.106121 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:50:42.144019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:50:42.160913 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:50:42.180218 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:50:42.188092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:50:42.203705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:50:42.217729 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:50:42.243864 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:50:42.273800 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 23:50:42.275530 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:50:42.290874 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:50:42.323670 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 23:50:42.323705 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 23:50:42.323715 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 23:50:42.323725 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 23:50:42.291098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:50:42.308994 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:50:42.364324 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 23:50:42.364344 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 23:50:42.364354 kernel: scsi host0: storvsc_host_t
Jan 23 23:50:42.364499 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 23:50:42.364509 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 23:50:42.329764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:50:42.399218 kernel: scsi host1: storvsc_host_t
Jan 23 23:50:42.399424 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 23:50:42.399438 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 23:50:42.399537 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 23:50:42.329971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:50:42.352772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:50:42.412120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:50:42.425837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:50:42.425961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:50:42.458393 kernel: PTP clock support registered
Jan 23 23:50:42.458416 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: VF slot 1 added
Jan 23 23:50:42.442071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:50:42.476926 kernel: hv_vmbus: registering driver hv_pci
Jan 23 23:50:42.476982 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 23:50:42.477169 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 23:50:42.488041 kernel: hv_pci 8ce285e7-9473-4ff8-925b-191df071bdab: PCI VMBus probing: Using version 0x10004
Jan 23 23:50:42.489212 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 23:50:42.490520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:50:42.521930 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 23:50:42.521953 kernel: hv_pci 8ce285e7-9473-4ff8-925b-191df071bdab: PCI host bridge to bus 9473:00
Jan 23 23:50:42.522113 kernel: hv_vmbus: registering driver hv_utils
Jan 23 23:50:42.522124 kernel: pci_bus 9473:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 23:50:42.522225 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 23:50:42.522235 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 23:50:42.522244 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 23:50:42.935342 systemd-resolved[257]: Clock change detected. Flushing caches.
Jan 23 23:50:42.953130 kernel: pci_bus 9473:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 23:50:42.958358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 23:50:42.958560 kernel: pci 9473:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 23 23:50:42.953778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:50:42.991182 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 23 23:50:42.991354 kernel: pci 9473:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:50:42.991375 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 23:50:42.991467 kernel: pci 9473:00:02.0: enabling Extended Tags
Jan 23 23:50:42.991482 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 23:50:42.999936 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 23:50:43.008145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#277 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:50:43.008383 kernel: pci 9473:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9473:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 23 23:50:43.017356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:50:43.033165 kernel: pci_bus 9473:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 23:50:43.042263 kernel: pci 9473:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 23:50:43.042488 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:50:43.046784 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 23:50:43.067956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:50:43.092012 kernel: mlx5_core 9473:00:02.0: enabling device (0000 -> 0002)
Jan 23 23:50:43.097867 kernel: mlx5_core 9473:00:02.0: firmware version: 16.30.5026
Jan 23 23:50:43.292646 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: VF registering: eth1
Jan 23 23:50:43.292866 kernel: mlx5_core 9473:00:02.0 eth1: joined to eth0
Jan 23 23:50:43.298003 kernel: mlx5_core 9473:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 23:50:43.306874 kernel: mlx5_core 9473:00:02.0 enP38003s1: renamed from eth1
Jan 23 23:50:43.610280 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (486)
Jan 23 23:50:43.618502 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 23:50:43.632009 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497)
Jan 23 23:50:43.638840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 23 23:50:43.644167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 23:50:43.670319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:50:43.685853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 23:50:43.704109 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:50:43.728883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:50:43.736877 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:50:43.745904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:50:44.747534 disk-uuid[608]: The operation has completed successfully.
Jan 23 23:50:44.752120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 23:50:44.821573 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:50:44.822882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
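
The two hv_storvsc "cmd 0x85" warnings above are benign on a Hyper-V virtual disk: opcode 0x85 is ATA PASS-THROUGH(16), typically sent by udev/SMART-style probing, and SCSI status 0x2 is CHECK CONDITION, i.e. the virtual disk simply rejects ATA passthrough. A tiny decoder sketch (the lookup tables are deliberately abbreviated, not exhaustive):

```python
# Decode the hv_storvsc error fields logged above (abbreviated tables).
SCSI_OPCODES = {0x85: 'ATA PASS-THROUGH(16)'}
SCSI_STATUS = {0x0: 'GOOD', 0x2: 'CHECK CONDITION'}

def decode(cmd: int, scsi_status: int) -> str:
    op = SCSI_OPCODES.get(cmd, f'opcode {cmd:#x}')
    st = SCSI_STATUS.get(scsi_status, f'status {scsi_status:#x}')
    return f'{op} -> {st}'

print(decode(0x85, 0x2))   # ATA PASS-THROUGH(16) -> CHECK CONDITION
```
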
Jan 23 23:50:44.846002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:50:44.855851 sh[721]: Success
Jan 23 23:50:44.884887 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:50:45.154332 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:50:45.162998 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:50:45.168881 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:50:45.205758 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:50:45.205813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:50:45.211530 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:50:45.216543 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:50:45.220190 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:50:45.678647 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:50:45.683161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:50:45.704155 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:50:45.711036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:50:45.750127 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:45.750188 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:50:45.754247 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:50:45.801881 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:50:45.811529 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:50:45.821016 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:45.828254 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:50:45.846055 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:50:45.851284 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:50:45.858042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:50:45.883764 systemd-networkd[902]: lo: Link UP
Jan 23 23:50:45.883774 systemd-networkd[902]: lo: Gained carrier
Jan 23 23:50:45.885355 systemd-networkd[902]: Enumeration completed
Jan 23 23:50:45.887876 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:50:45.888312 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:50:45.888316 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:50:45.892767 systemd[1]: Reached target network.target - Network.
Jan 23 23:50:45.969878 kernel: mlx5_core 9473:00:02.0 enP38003s1: Link up
Jan 23 23:50:46.006951 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: Data path switched to VF: enP38003s1
Jan 23 23:50:46.007592 systemd-networkd[902]: enP38003s1: Link UP
Jan 23 23:50:46.007821 systemd-networkd[902]: eth0: Link UP
Jan 23 23:50:46.008212 systemd-networkd[902]: eth0: Gained carrier
Jan 23 23:50:46.008222 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:50:46.015355 systemd-networkd[902]: enP38003s1: Gained carrier
Jan 23 23:50:46.032898 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
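
The DHCPv4 lease just logged can be checked with the standard library: 10.200.20.33/24 puts the VM on 10.200.20.0/24 with the gateway on-link (168.63.129.16 is Azure's fixed wire-server/DHCP address):

```python
# Inspect the logged DHCPv4 lease with the stdlib ipaddress module.
import ipaddress

iface = ipaddress.ip_interface('10.200.20.33/24')
gateway = ipaddress.ip_address('10.200.20.1')
print(iface.network)              # 10.200.20.0/24
print(gateway in iface.network)   # True: the gateway is on-link
```
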
Jan 23 23:50:46.849739 ignition[905]: Ignition 2.19.0
Jan 23 23:50:46.849754 ignition[905]: Stage: fetch-offline
Jan 23 23:50:46.853302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:50:46.849791 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:46.849800 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:46.849929 ignition[905]: parsed url from cmdline: ""
Jan 23 23:50:46.849933 ignition[905]: no config URL provided
Jan 23 23:50:46.849938 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:50:46.875094 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:50:46.849947 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:50:46.849952 ignition[905]: failed to fetch config: resource requires networking
Jan 23 23:50:46.850162 ignition[905]: Ignition finished successfully
Jan 23 23:50:46.897271 ignition[921]: Ignition 2.19.0
Jan 23 23:50:46.897277 ignition[921]: Stage: fetch
Jan 23 23:50:46.897446 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:46.897456 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:46.897547 ignition[921]: parsed url from cmdline: ""
Jan 23 23:50:46.897550 ignition[921]: no config URL provided
Jan 23 23:50:46.897555 ignition[921]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:50:46.897561 ignition[921]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:50:46.897583 ignition[921]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 23:50:46.980377 ignition[921]: GET result: OK
Jan 23 23:50:46.980449 ignition[921]: config has been read from IMDS userdata
Jan 23 23:50:46.980493 ignition[921]: parsing config with SHA512: cd3503c7e3f668c33ff1a5057a6d1b23ed5f8e006609a820cc3211b58a472f16bc80a65e94b7a8aa145ac09aff5816ae7c575c4987589f3ce8814bc6e08eed4c
Jan 23 23:50:46.986922 unknown[921]: fetched base config from "system"
Jan 23 23:50:46.987315 ignition[921]: fetch: fetch complete
Jan 23 23:50:46.986929 unknown[921]: fetched base config from "system"
Jan 23 23:50:46.987319 ignition[921]: fetch: fetch passed
Jan 23 23:50:46.986934 unknown[921]: fetched user config from "azure"
Jan 23 23:50:46.987361 ignition[921]: Ignition finished successfully
Jan 23 23:50:46.989116 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
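
Ignition's fetch stage above reads its config from Azure IMDS userData and logs the config's SHA512. A sketch of the equivalent request (runs only inside an Azure VM; the Metadata header requirement and the base64 encoding of userData are Azure IMDS behavior, not taken from this log):

```python
# Sketch: fetch the same IMDS userData Ignition logged above and hash it.
import base64
import hashlib
import urllib.request

URL = ('http://169.254.169.254/metadata/instance/compute/userData'
       '?api-version=2021-01-01&format=text')   # URL verbatim from the log

req = urllib.request.Request(URL, headers={'Metadata': 'true'})
with urllib.request.urlopen(req, timeout=5) as resp:
    config = base64.b64decode(resp.read())      # IMDS returns userData base64-encoded

# Compare against the "parsing config with SHA512: ..." digest in the log:
print(hashlib.sha512(config).hexdigest())
```
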
Jan 23 23:50:47.002098 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:50:47.026715 ignition[927]: Ignition 2.19.0
Jan 23 23:50:47.026721 ignition[927]: Stage: kargs
Jan 23 23:50:47.033805 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:50:47.026961 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:47.026971 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:47.028447 ignition[927]: kargs: kargs passed
Jan 23 23:50:47.028503 ignition[927]: Ignition finished successfully
Jan 23 23:50:47.060418 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:50:47.073548 ignition[933]: Ignition 2.19.0
Jan 23 23:50:47.073560 ignition[933]: Stage: disks
Jan 23 23:50:47.077727 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:50:47.073731 ignition[933]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:47.083890 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:50:47.073741 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:47.092494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:50:47.074689 ignition[933]: disks: disks passed
Jan 23 23:50:47.101331 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:50:47.074738 ignition[933]: Ignition finished successfully
Jan 23 23:50:47.110247 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:50:47.120022 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:50:47.145088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:50:47.214908 systemd-fsck[942]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:50:47.221345 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:50:47.236112 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:50:47.292896 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:50:47.293503 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:50:47.297482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:50:47.323982 systemd-networkd[902]: eth0: Gained IPv6LL
Jan 23 23:50:47.341997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:50:47.364892 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Jan 23 23:50:47.375640 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:47.375700 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:50:47.379369 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:50:47.381072 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:50:47.392047 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:50:47.407959 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:50:47.406973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:50:47.407017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:50:47.414692 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:50:47.426618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:50:47.447594 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:50:47.896930 coreos-metadata[969]: Jan 23 23:50:47.896 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:50:47.905362 coreos-metadata[969]: Jan 23 23:50:47.905 INFO Fetch successful
Jan 23 23:50:47.905362 coreos-metadata[969]: Jan 23 23:50:47.905 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:50:47.920555 coreos-metadata[969]: Jan 23 23:50:47.920 INFO Fetch successful
Jan 23 23:50:47.936735 coreos-metadata[969]: Jan 23 23:50:47.936 INFO wrote hostname ci-4081.3.6-n-73953443dc to /sysroot/etc/hostname
Jan 23 23:50:47.939601 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:50:48.122616 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:50:48.146147 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:50:48.168629 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:50:48.174482 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:50:49.600393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:50:49.612078 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:50:49.619045 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:50:49.628827 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:50:49.642176 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:49.666091 ignition[1071]: INFO : Ignition 2.19.0
Jan 23 23:50:49.666091 ignition[1071]: INFO : Stage: mount
Jan 23 23:50:49.675206 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:49.675206 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:49.675206 ignition[1071]: INFO : mount: mount passed
Jan 23 23:50:49.675206 ignition[1071]: INFO : Ignition finished successfully
Jan 23 23:50:49.674319 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:50:49.680259 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:50:49.700976 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:50:49.719066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:50:49.750873 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084)
Jan 23 23:50:49.761971 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:49.762009 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:50:49.765838 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:50:49.773967 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:50:49.775383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:50:49.801904 ignition[1102]: INFO : Ignition 2.19.0
Jan 23 23:50:49.801904 ignition[1102]: INFO : Stage: files
Jan 23 23:50:49.801904 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:49.801904 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:49.817845 ignition[1102]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:50:49.823055 ignition[1102]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:50:49.823055 ignition[1102]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:50:49.908880 ignition[1102]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:50:49.914950 ignition[1102]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:50:49.914950 ignition[1102]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:50:49.909338 unknown[1102]: wrote ssh authorized keys file for user: core
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 23:50:49.968700 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:50:50.151253 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 23:50:53.939044 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 23:50:54.211092 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:54.211092 ignition[1102]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: files passed
Jan 23 23:50:54.226942 ignition[1102]: INFO : Ignition finished successfully
Jan 23 23:50:54.222155 systemd[1]: Finished ignition-files.service - Ignition (files).
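
The files-stage ops above (write /etc/flatcar/update.conf, drop in 10-use-cgroupfs.conf for containerd.service, install and enable prepare-helm.service) are driven by a declarative Ignition v3 JSON config. A hypothetical, heavily abbreviated sketch of the kind of config that would produce such ops; the field names are from the Ignition spec as I recall it and should be verified against it, and the file/unit contents here are placeholders:

```python
# Hypothetical Ignition v3 config sketch matching the ops logged above.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {"files": [{
        "path": "/etc/flatcar/update.conf",
        "mode": 0o644,
        "contents": {"source": "data:,GROUP%3Dstable%0A"},  # placeholder contents
    }]},
    "systemd": {"units": [
        {"name": "containerd.service",
         "dropins": [{"name": "10-use-cgroupfs.conf",
                      "contents": "[Service]\n# placeholder drop-in\n"}]},
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=placeholder\n"},
    ]},
}
print(json.dumps(config, indent=2))
```
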
Jan 23 23:50:54.369941 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:50:54.369941 initrd-setup-root-after-ignition[1129]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:50:54.311890 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:50:54.395711 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:50:54.335625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:50:54.345366 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:50:54.365117 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:50:54.411012 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:50:54.411141 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:50:54.420384 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:50:54.430615 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:50:54.438837 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:50:54.454114 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:50:54.480624 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:50:54.500112 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:50:54.516506 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:50:54.526957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:50:54.537014 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:50:54.544834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:50:54.545028 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:50:54.559335 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:50:54.569217 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:50:54.577559 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:50:54.582862 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:50:54.593272 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:50:54.603045 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:50:54.611641 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:50:54.620837 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:50:54.630462 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:50:54.638578 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:50:54.646329 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:50:54.646507 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:50:54.659091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:50:54.669312 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:50:54.679583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:50:54.679691 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:50:54.690684 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:50:54.690873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:50:54.706386 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:50:54.706558 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:50:54.716631 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:50:54.716785 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:50:54.725714 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 23:50:54.725813 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:50:54.751511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:50:54.777047 ignition[1154]: INFO : Ignition 2.19.0
Jan 23 23:50:54.777047 ignition[1154]: INFO : Stage: umount
Jan 23 23:50:54.777047 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:54.777047 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:54.781135 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:50:54.830423 ignition[1154]: INFO : umount: umount passed
Jan 23 23:50:54.830423 ignition[1154]: INFO : Ignition finished successfully
Jan 23 23:50:54.786485 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:50:54.786706 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:50:54.792210 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:50:54.792357 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:50:54.809642 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:50:54.810307 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:50:54.811898 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:50:54.816756 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:50:54.816875 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:50:54.826520 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:50:54.826569 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:50:54.834682 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:50:54.834724 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:50:54.851774 systemd[1]: Stopped target network.target - Network.
Jan 23 23:50:54.861015 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:50:54.861078 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:50:54.871448 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:50:54.880387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:50:54.893051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:50:54.901133 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:50:54.911564 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:50:54.919716 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:50:54.919777 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:50:54.927961 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:50:54.928004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:50:54.936236 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:50:54.936282 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:50:54.945937 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:50:54.945976 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:50:54.954715 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:50:54.959436 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:50:54.966997 systemd-networkd[902]: eth0: DHCPv6 lease lost
Jan 23 23:50:54.973134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:50:54.973237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:50:54.983433 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:50:54.984910 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:50:54.991265 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:50:54.991352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:50:55.003505 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:50:55.003618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:50:55.169926 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: Data path switched from VF: enP38003s1
Jan 23 23:50:55.026084 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 23:50:55.034843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 23:50:55.034946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:50:55.044339 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 23:50:55.044386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:50:55.053999 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 23:50:55.054051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:50:55.062546 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 23:50:55.062590 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:50:55.074252 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:50:55.102115 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 23:50:55.102280 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:50:55.117741 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 23:50:55.117834 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:50:55.126139 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 23:50:55.126169 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:50:55.135666 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 23:50:55.135711 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:50:55.149939 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 23:50:55.150007 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:50:55.170027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:50:55.170103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:50:55.199135 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 23:50:55.209951 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 23:50:55.210032 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:50:55.225523 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 23:50:55.225590 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:50:55.237081 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:50:55.237131 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:50:55.248593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:50:55.248641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:50:55.258771 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:50:55.258904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:50:55.268700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 23:50:55.268784 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 23:50:55.279750 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 23:50:55.279852 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 23:50:55.290093 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 23:50:55.298237 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:50:55.298341 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:50:55.321373 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 23:50:55.705098 systemd[1]: Switching root.
Jan 23 23:50:55.730281 systemd-journald[217]: Journal stopped
Total pages: 1032156 Jan 23 23:50:41.197396 kernel: Policy zone: Normal Jan 23 23:50:41.197403 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:50:41.197410 kernel: software IO TLB: area num 2. Jan 23 23:50:41.197416 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 23 23:50:41.197425 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 23 23:50:41.197432 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:50:41.197438 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:50:41.197446 kernel: rcu: RCU event tracing is enabled. Jan 23 23:50:41.197453 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:50:41.197460 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:50:41.197467 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:50:41.197473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 23:50:41.197480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:50:41.197487 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:50:41.197494 kernel: GICv3: 960 SPIs implemented Jan 23 23:50:41.197502 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:50:41.197508 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:50:41.197515 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 23:50:41.197522 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 23:50:41.197528 kernel: ITS: No ITS available, not enabling LPIs Jan 23 23:50:41.197535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:50:41.197542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:50:41.197549 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 23:50:41.197556 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 23:50:41.197563 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 23:50:41.197570 kernel: Console: colour dummy device 80x25 Jan 23 23:50:41.197578 kernel: printk: console [tty1] enabled Jan 23 23:50:41.197586 kernel: ACPI: Core revision 20230628 Jan 23 23:50:41.197593 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 23:50:41.197600 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:50:41.197607 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:50:41.197614 kernel: landlock: Up and running. Jan 23 23:50:41.197621 kernel: SELinux: Initializing. Jan 23 23:50:41.197628 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:50:41.197635 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:50:41.197643 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:50:41.197651 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:50:41.197658 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 23 23:50:41.197665 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 23:50:41.197680 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 23:50:41.197688 kernel: rcu: Hierarchical SRCU implementation. 
Jan 23 23:50:41.197695 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:50:41.197702 kernel: Remapping and enabling EFI services. Jan 23 23:50:41.197716 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:50:41.197724 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:50:41.197731 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 23:50:41.197738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 23:50:41.197747 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 23:50:41.197755 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:50:41.197762 kernel: SMP: Total of 2 processors activated. Jan 23 23:50:41.197770 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:50:41.197777 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 23:50:41.197786 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 23:50:41.197793 kernel: CPU features: detected: CRC32 instructions Jan 23 23:50:41.197801 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 23:50:41.197808 kernel: CPU features: detected: LSE atomic instructions Jan 23 23:50:41.197816 kernel: CPU features: detected: Privileged Access Never Jan 23 23:50:41.197823 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:50:41.197830 kernel: alternatives: applying system-wide alternatives Jan 23 23:50:41.197837 kernel: devtmpfs: initialized Jan 23 23:50:41.197845 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:50:41.197854 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:50:41.197861 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:50:41.197869 kernel: SMBIOS 3.1.0 present. Jan 23 23:50:41.197876 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 23 23:50:41.197884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:50:41.197891 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:50:41.197899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:50:41.197906 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:50:41.197914 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:50:41.197922 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 23 23:50:41.197930 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:50:41.197937 kernel: cpuidle: using governor menu Jan 23 23:50:41.197944 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 23:50:41.197952 kernel: ASID allocator initialised with 32768 entries Jan 23 23:50:41.197959 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:50:41.197967 kernel: Serial: AMBA PL011 UART driver Jan 23 23:50:41.197974 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 23:50:41.197982 kernel: Modules: 0 pages in range for non-PLT usage Jan 23 23:50:41.197991 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:50:41.197998 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:50:41.198005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:50:41.198013 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:50:41.198020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:50:41.198028 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:50:41.198035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:50:41.198043 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:50:41.198050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:50:41.198059 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:50:41.198066 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:50:41.198073 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:50:41.198081 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:50:41.198088 kernel: ACPI: Interpreter enabled Jan 23 23:50:41.198096 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:50:41.198103 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 23 23:50:41.198111 kernel: printk: console [ttyAMA0] enabled Jan 23 23:50:41.198118 kernel: printk: bootconsole [pl11] disabled Jan 23 23:50:41.198127 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 23 23:50:41.198134 kernel: iommu: Default domain type: Translated Jan 23 23:50:41.198142 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:50:41.198149 kernel: efivars: Registered efivars operations Jan 23 23:50:41.198156 kernel: vgaarb: loaded Jan 23 23:50:41.198164 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:50:41.198171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:50:41.198178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:50:41.198186 kernel: pnp: PnP ACPI init Jan 23 23:50:41.198195 kernel: pnp: PnP ACPI: found 0 devices Jan 23 23:50:41.198202 kernel: NET: Registered PF_INET protocol family Jan 23 23:50:41.198209 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:50:41.198217 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:50:41.198224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:50:41.198232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:50:41.198239 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:50:41.198247 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:50:41.198254 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:50:41.198263 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:50:41.198271 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 
23:50:41.198278 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:50:41.198285 kernel: kvm [1]: HYP mode not available Jan 23 23:50:41.198293 kernel: Initialise system trusted keyrings Jan 23 23:50:41.198300 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:50:41.198307 kernel: Key type asymmetric registered Jan 23 23:50:41.198314 kernel: Asymmetric key parser 'x509' registered Jan 23 23:50:41.198322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:50:41.198331 kernel: io scheduler mq-deadline registered Jan 23 23:50:41.198338 kernel: io scheduler kyber registered Jan 23 23:50:41.198345 kernel: io scheduler bfq registered Jan 23 23:50:41.198353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:50:41.198360 kernel: thunder_xcv, ver 1.0 Jan 23 23:50:41.198367 kernel: thunder_bgx, ver 1.0 Jan 23 23:50:41.198375 kernel: nicpf, ver 1.0 Jan 23 23:50:41.198382 kernel: nicvf, ver 1.0 Jan 23 23:50:41.198510 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:50:41.198583 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:50:40 UTC (1769212240) Jan 23 23:50:41.198593 kernel: efifb: probing for efifb Jan 23 23:50:41.198601 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 23:50:41.198609 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 23:50:41.198616 kernel: efifb: scrolling: redraw Jan 23 23:50:41.198623 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 23:50:41.198631 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 23:50:41.198638 kernel: fb0: EFI VGA frame buffer device Jan 23 23:50:41.198647 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 23 23:50:41.198654 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:50:41.198662 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 23 23:50:41.198670 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:50:41.198684 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:50:41.198691 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:50:41.198699 kernel: Segment Routing with IPv6 Jan 23 23:50:41.198706 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:50:41.198714 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:50:41.198723 kernel: Key type dns_resolver registered Jan 23 23:50:41.198730 kernel: registered taskstats version 1 Jan 23 23:50:41.198737 kernel: Loading compiled-in X.509 certificates Jan 23 23:50:41.198745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:50:41.198752 kernel: Key type .fscrypt registered Jan 23 23:50:41.198759 kernel: Key type fscrypt-provisioning registered Jan 23 23:50:41.198766 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 23 23:50:41.198774 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:50:41.198781 kernel: ima: No architecture policies found Jan 23 23:50:41.198790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:50:41.198798 kernel: clk: Disabling unused clocks Jan 23 23:50:41.198805 kernel: Freeing unused kernel memory: 39424K Jan 23 23:50:41.198812 kernel: Run /init as init process Jan 23 23:50:41.198819 kernel: with arguments: Jan 23 23:50:41.198826 kernel: /init Jan 23 23:50:41.198834 kernel: with environment: Jan 23 23:50:41.198841 kernel: HOME=/ Jan 23 23:50:41.198848 kernel: TERM=linux Jan 23 23:50:41.198857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:50:41.198868 systemd[1]: Detected virtualization microsoft. Jan 23 23:50:41.198876 systemd[1]: Detected architecture arm64. Jan 23 23:50:41.198883 systemd[1]: Running in initrd. Jan 23 23:50:41.198891 systemd[1]: No hostname configured, using default hostname. Jan 23 23:50:41.198899 systemd[1]: Hostname set to . Jan 23 23:50:41.198907 systemd[1]: Initializing machine ID from random generator. Jan 23 23:50:41.198917 systemd[1]: Queued start job for default target initrd.target. Jan 23 23:50:41.198925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:50:41.198933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:50:41.198941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:50:41.198950 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:50:41.198958 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:50:41.198966 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:50:41.198975 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:50:41.198985 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:50:41.198993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:50:41.199001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:50:41.199009 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:50:41.199017 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:50:41.199024 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:50:41.199032 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:50:41.199040 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:50:41.199049 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:50:41.199057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:50:41.199065 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:50:41.199073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 23:50:41.199081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:50:41.199089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:50:41.199097 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:50:41.199105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:50:41.199114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:50:41.199122 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:50:41.199130 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:50:41.199137 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:50:41.199145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:50:41.199167 systemd-journald[217]: Collecting audit messages is disabled. Jan 23 23:50:41.199189 systemd-journald[217]: Journal started Jan 23 23:50:41.199207 systemd-journald[217]: Runtime Journal (/run/log/journal/924e5c1b3f064108bdd3ea72997672ea) is 8.0M, max 78.5M, 70.5M free. Jan 23 23:50:41.211127 systemd-modules-load[218]: Inserted module 'overlay' Jan 23 23:50:41.219980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:50:41.235691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 23:50:41.243556 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:50:41.243605 kernel: Bridge firewalling registered Jan 23 23:50:41.243688 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 23 23:50:41.252067 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:50:41.267704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:50:41.273582 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:50:41.282065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:50:41.291095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:50:41.308916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:50:41.316856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:50:41.328852 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:50:41.349178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:50:41.361491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:50:41.369143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:50:41.376577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:50:41.387022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:50:41.409969 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:50:41.423803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:50:41.437548 dracut-cmdline[252]: dracut-dracut-053 Jan 23 23:50:41.438929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 23:50:41.465453 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:50:41.452855 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:50:41.506413 systemd-resolved[257]: Positive Trust Anchors: Jan 23 23:50:41.506428 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:50:41.506459 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:50:41.512220 systemd-resolved[257]: Defaulting to hostname 'linux'. Jan 23 23:50:41.513078 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:50:41.525345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:50:41.577684 kernel: SCSI subsystem initialized Jan 23 23:50:41.584684 kernel: Loading iSCSI transport class v2.0-870. Jan 23 23:50:41.594689 kernel: iscsi: registered transport (tcp) Jan 23 23:50:41.612221 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:50:41.612282 kernel: QLogic iSCSI HBA Driver Jan 23 23:50:41.645391 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:50:41.655932 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:50:41.690250 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:50:41.690327 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:50:41.695193 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:50:41.743700 kernel: raid6: neonx8 gen() 15780 MB/s Jan 23 23:50:41.762691 kernel: raid6: neonx4 gen() 15701 MB/s Jan 23 23:50:41.781693 kernel: raid6: neonx2 gen() 13311 MB/s Jan 23 23:50:41.801683 kernel: raid6: neonx1 gen() 10533 MB/s Jan 23 23:50:41.820680 kernel: raid6: int64x8 gen() 6979 MB/s Jan 23 23:50:41.839680 kernel: raid6: int64x4 gen() 7362 MB/s Jan 23 23:50:41.859684 kernel: raid6: int64x2 gen() 6145 MB/s Jan 23 23:50:41.881623 kernel: raid6: int64x1 gen() 5069 MB/s Jan 23 23:50:41.881642 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s Jan 23 23:50:41.904387 kernel: raid6: .... 
xor() 12038 MB/s, rmw enabled Jan 23 23:50:41.904399 kernel: raid6: using neon recovery algorithm Jan 23 23:50:41.915110 kernel: xor: measuring software checksum speed Jan 23 23:50:41.915136 kernel: 8regs : 19807 MB/sec Jan 23 23:50:41.918038 kernel: 32regs : 19650 MB/sec Jan 23 23:50:41.920902 kernel: arm64_neon : 27132 MB/sec Jan 23 23:50:41.924203 kernel: xor: using function: arm64_neon (27132 MB/sec) Jan 23 23:50:41.974819 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:50:41.985244 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:50:41.998834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:50:42.019102 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jan 23 23:50:42.023929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:50:42.042806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:50:42.059452 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Jan 23 23:50:42.093334 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:50:42.106121 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:50:42.144019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:50:42.160913 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:50:42.180218 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:50:42.188092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:50:42.203705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:50:42.217729 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:50:42.243864 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:50:42.273800 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 23:50:42.275530 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:50:42.290874 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:50:42.323670 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 23:50:42.323705 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 23:50:42.323715 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 23:50:42.323725 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 23 23:50:42.291098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:50:42.308994 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:50:42.364324 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 23:50:42.364344 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 23:50:42.364354 kernel: scsi host0: storvsc_host_t Jan 23 23:50:42.364499 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 23:50:42.364509 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 23:50:42.329764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 23:50:42.399218 kernel: scsi host1: storvsc_host_t Jan 23 23:50:42.399424 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 23 23:50:42.399438 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 23:50:42.399537 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 23:50:42.329971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:50:42.352772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:50:42.412120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:50:42.425837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:50:42.425961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:50:42.458393 kernel: PTP clock support registered Jan 23 23:50:42.458416 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: VF slot 1 added Jan 23 23:50:42.442071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:50:42.476926 kernel: hv_vmbus: registering driver hv_pci Jan 23 23:50:42.476982 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 23:50:42.477169 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 23:50:42.488041 kernel: hv_pci 8ce285e7-9473-4ff8-925b-191df071bdab: PCI VMBus probing: Using version 0x10004 Jan 23 23:50:42.489212 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 23:50:42.490520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:50:42.521930 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 23:50:42.521953 kernel: hv_pci 8ce285e7-9473-4ff8-925b-191df071bdab: PCI host bridge to bus 9473:00 Jan 23 23:50:42.522113 kernel: hv_vmbus: registering driver hv_utils Jan 23 23:50:42.522124 kernel: pci_bus 9473:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 23:50:42.522225 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 23:50:42.522235 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 23:50:42.522244 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 23:50:42.935342 systemd-resolved[257]: Clock change detected. Flushing caches. Jan 23 23:50:42.953130 kernel: pci_bus 9473:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 23:50:42.958358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 23:50:42.958560 kernel: pci 9473:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 23 23:50:42.953778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 23 23:50:42.991182 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 23:50:42.991354 kernel: pci 9473:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:50:42.991375 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 23:50:42.991467 kernel: pci 9473:00:02.0: enabling Extended Tags Jan 23 23:50:42.991482 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 23:50:42.999936 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 23:50:43.008145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#277 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:50:43.008383 kernel: pci 9473:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9473:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 23 23:50:43.017356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:50:43.033165 kernel: pci_bus 9473:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 23:50:43.042263 kernel: pci 9473:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 23:50:43.042488 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:50:43.046784 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 23:50:43.067956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#276 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 23:50:43.092012 kernel: mlx5_core 9473:00:02.0: enabling device (0000 -> 0002) Jan 23 23:50:43.097867 kernel: mlx5_core 9473:00:02.0: firmware version: 16.30.5026 Jan 23 23:50:43.292646 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: VF registering: eth1 Jan 23 23:50:43.292866 kernel: mlx5_core 9473:00:02.0 eth1: joined to eth0 Jan 23 23:50:43.298003 kernel: mlx5_core 9473:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 23:50:43.306874 kernel: mlx5_core 9473:00:02.0 enP38003s1: renamed from eth1 Jan 23 23:50:43.610280 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (486) Jan 23 23:50:43.618502 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 23:50:43.632009 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497) Jan 23 23:50:43.638840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 23 23:50:43.644167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 23:50:43.670319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 23:50:43.685853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 23:50:43.704109 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 23:50:43.728883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:50:43.736877 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:50:43.745904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:50:44.747534 disk-uuid[608]: The operation has completed successfully. Jan 23 23:50:44.752120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 23:50:44.821573 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:50:44.822882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 23 23:50:44.846002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:50:44.855851 sh[721]: Success Jan 23 23:50:44.884887 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:50:45.154332 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:50:45.162998 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:50:45.168881 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 23:50:45.205758 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:50:45.205813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:50:45.211530 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:50:45.216543 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:50:45.220190 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:50:45.678647 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:50:45.683161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:50:45.704155 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:50:45.711036 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:50:45.750127 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:50:45.750188 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:50:45.754247 kernel: BTRFS info (device sda6): using free space tree Jan 23 23:50:45.801881 kernel: BTRFS info (device sda6): auto enabling async discard Jan 23 23:50:45.811529 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:50:45.821016 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:50:45.828254 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:50:45.846055 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:50:45.851284 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:50:45.858042 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:50:45.883764 systemd-networkd[902]: lo: Link UP Jan 23 23:50:45.883774 systemd-networkd[902]: lo: Gained carrier Jan 23 23:50:45.885355 systemd-networkd[902]: Enumeration completed Jan 23 23:50:45.887876 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:50:45.888312 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:50:45.888316 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:50:45.892767 systemd[1]: Reached target network.target - Network. 
Jan 23 23:50:45.969878 kernel: mlx5_core 9473:00:02.0 enP38003s1: Link up Jan 23 23:50:46.006951 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: Data path switched to VF: enP38003s1 Jan 23 23:50:46.007592 systemd-networkd[902]: enP38003s1: Link UP Jan 23 23:50:46.007821 systemd-networkd[902]: eth0: Link UP Jan 23 23:50:46.008212 systemd-networkd[902]: eth0: Gained carrier Jan 23 23:50:46.008222 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:50:46.015355 systemd-networkd[902]: enP38003s1: Gained carrier Jan 23 23:50:46.032898 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:50:46.849739 ignition[905]: Ignition 2.19.0 Jan 23 23:50:46.849754 ignition[905]: Stage: fetch-offline Jan 23 23:50:46.853302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:50:46.849791 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:50:46.849800 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:50:46.849929 ignition[905]: parsed url from cmdline: "" Jan 23 23:50:46.849933 ignition[905]: no config URL provided Jan 23 23:50:46.849938 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:50:46.875094 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:50:46.849947 ignition[905]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:50:46.849952 ignition[905]: failed to fetch config: resource requires networking Jan 23 23:50:46.850162 ignition[905]: Ignition finished successfully Jan 23 23:50:46.897271 ignition[921]: Ignition 2.19.0 Jan 23 23:50:46.897277 ignition[921]: Stage: fetch Jan 23 23:50:46.897446 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:50:46.897456 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 23:50:46.897547 ignition[921]: parsed url from cmdline: "" Jan 23 23:50:46.897550 ignition[921]: no config URL provided Jan 23 23:50:46.897555 ignition[921]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:50:46.897561 ignition[921]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:50:46.897583 ignition[921]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 23:50:46.980377 ignition[921]: GET result: OK Jan 23 23:50:46.980449 ignition[921]: config has been read from IMDS userdata Jan 23 23:50:46.980493 ignition[921]: parsing config with SHA512: cd3503c7e3f668c33ff1a5057a6d1b23ed5f8e006609a820cc3211b58a472f16bc80a65e94b7a8aa145ac09aff5816ae7c575c4987589f3ce8814bc6e08eed4c Jan 23 23:50:46.986922 unknown[921]: fetched base config from "system" Jan 23 23:50:46.987315 ignition[921]: fetch: fetch complete Jan 23 23:50:46.986929 unknown[921]: fetched base config from "system" Jan 23 23:50:46.987319 ignition[921]: fetch: fetch passed Jan 23 23:50:46.986934 unknown[921]: fetched user config from "azure" Jan 23 23:50:46.987361 ignition[921]: Ignition finished successfully Jan 23 23:50:46.989116 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:50:47.002098 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 23:50:47.026715 ignition[927]: Ignition 2.19.0 Jan 23 23:50:47.026721 ignition[927]: Stage: kargs Jan 23 23:50:47.033805 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 23 23:50:47.026961 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:47.026971 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:47.028447 ignition[927]: kargs: kargs passed
Jan 23 23:50:47.028503 ignition[927]: Ignition finished successfully
Jan 23 23:50:47.060418 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:50:47.073548 ignition[933]: Ignition 2.19.0
Jan 23 23:50:47.073560 ignition[933]: Stage: disks
Jan 23 23:50:47.077727 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:50:47.073731 ignition[933]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:47.083890 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:50:47.073741 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:47.092494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:50:47.074689 ignition[933]: disks: disks passed
Jan 23 23:50:47.101331 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:50:47.074738 ignition[933]: Ignition finished successfully
Jan 23 23:50:47.110247 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:50:47.120022 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:50:47.145088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:50:47.214908 systemd-fsck[942]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 23 23:50:47.221345 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:50:47.236112 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:50:47.292896 kernel: EXT4-fs (sda9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:50:47.293503 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:50:47.297482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:50:47.323982 systemd-networkd[902]: eth0: Gained IPv6LL
Jan 23 23:50:47.341997 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:50:47.364892 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Jan 23 23:50:47.375640 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:47.375700 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:50:47.379369 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:50:47.381072 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:50:47.392047 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 23:50:47.407959 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:50:47.406973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:50:47.407017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:50:47.414692 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:50:47.426618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:50:47.447594 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:50:47.896930 coreos-metadata[969]: Jan 23 23:50:47.896 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 23:50:47.905362 coreos-metadata[969]: Jan 23 23:50:47.905 INFO Fetch successful
Jan 23 23:50:47.905362 coreos-metadata[969]: Jan 23 23:50:47.905 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 23:50:47.920555 coreos-metadata[969]: Jan 23 23:50:47.920 INFO Fetch successful
Jan 23 23:50:47.936735 coreos-metadata[969]: Jan 23 23:50:47.936 INFO wrote hostname ci-4081.3.6-n-73953443dc to /sysroot/etc/hostname
Jan 23 23:50:47.939601 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:50:48.122616 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:50:48.146147 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:50:48.168629 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:50:48.174482 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:50:49.600393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:50:49.612078 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:50:49.619045 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:50:49.628827 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:50:49.642176 kernel: BTRFS info (device sda6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:49.666091 ignition[1071]: INFO : Ignition 2.19.0
Jan 23 23:50:49.666091 ignition[1071]: INFO : Stage: mount
Jan 23 23:50:49.675206 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:49.675206 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:49.675206 ignition[1071]: INFO : mount: mount passed
Jan 23 23:50:49.675206 ignition[1071]: INFO : Ignition finished successfully
Jan 23 23:50:49.674319 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:50:49.680259 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:50:49.700976 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:50:49.719066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:50:49.750873 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084)
Jan 23 23:50:49.761971 kernel: BTRFS info (device sda6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:50:49.762009 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:50:49.765838 kernel: BTRFS info (device sda6): using free space tree
Jan 23 23:50:49.773967 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 23 23:50:49.775383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:50:49.801904 ignition[1102]: INFO : Ignition 2.19.0
Jan 23 23:50:49.801904 ignition[1102]: INFO : Stage: files
Jan 23 23:50:49.801904 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:49.801904 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:49.817845 ignition[1102]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:50:49.823055 ignition[1102]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:50:49.823055 ignition[1102]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:50:49.908880 ignition[1102]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:50:49.914950 ignition[1102]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:50:49.914950 ignition[1102]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:50:49.909338 unknown[1102]: wrote ssh authorized keys file for user: core
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:50:49.930965 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 23:50:49.968700 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:50:50.151253 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:50.160389 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 23:50:53.939044 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 23:50:54.211092 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:50:54.211092 ignition[1102]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:50:54.226942 ignition[1102]: INFO : files: files passed
Jan 23 23:50:54.226942 ignition[1102]: INFO : Ignition finished successfully
Jan 23 23:50:54.222155 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:50:54.271803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:50:54.285067 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:50:54.310179 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:50:54.369941 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:50:54.369941 initrd-setup-root-after-ignition[1129]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:50:54.311890 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:50:54.395711 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:50:54.335625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:50:54.345366 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:50:54.365117 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:50:54.411012 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:50:54.411141 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:50:54.420384 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:50:54.430615 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:50:54.438837 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:50:54.454114 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:50:54.480624 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:50:54.500112 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:50:54.516506 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:50:54.526957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:50:54.537014 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:50:54.544834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:50:54.545028 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:50:54.559335 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:50:54.569217 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:50:54.577559 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:50:54.582862 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:50:54.593272 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:50:54.603045 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:50:54.611641 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:50:54.620837 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:50:54.630462 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:50:54.638578 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:50:54.646329 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:50:54.646507 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:50:54.659091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:50:54.669312 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:50:54.679583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:50:54.679691 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:50:54.690684 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:50:54.690873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:50:54.706386 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:50:54.706558 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:50:54.716631 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:50:54.716785 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:50:54.725714 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 23:50:54.725813 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 23:50:54.751511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:50:54.777047 ignition[1154]: INFO : Ignition 2.19.0
Jan 23 23:50:54.777047 ignition[1154]: INFO : Stage: umount
Jan 23 23:50:54.777047 ignition[1154]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:50:54.777047 ignition[1154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 23:50:54.781135 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:50:54.830423 ignition[1154]: INFO : umount: umount passed
Jan 23 23:50:54.830423 ignition[1154]: INFO : Ignition finished successfully
Jan 23 23:50:54.786485 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:50:54.786706 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:50:54.792210 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:50:54.792357 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:50:54.809642 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:50:54.810307 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:50:54.811898 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:50:54.816756 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:50:54.816875 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:50:54.826520 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:50:54.826569 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:50:54.834682 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:50:54.834724 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:50:54.851774 systemd[1]: Stopped target network.target - Network.
Jan 23 23:50:54.861015 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:50:54.861078 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:50:54.871448 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:50:54.880387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:50:54.893051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:50:54.901133 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:50:54.911564 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:50:54.919716 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:50:54.919777 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:50:54.927961 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:50:54.928004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:50:54.936236 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:50:54.936282 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:50:54.945937 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:50:54.945976 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:50:54.954715 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:50:54.959436 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:50:54.966997 systemd-networkd[902]: eth0: DHCPv6 lease lost
Jan 23 23:50:54.973134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:50:54.973237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:50:54.983433 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:50:54.984910 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:50:54.991265 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:50:54.991352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:50:55.003505 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:50:55.003618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:50:55.169926 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: Data path switched from VF: enP38003s1
Jan 23 23:50:55.026084 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 23:50:55.034843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 23:50:55.034946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:50:55.044339 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 23:50:55.044386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:50:55.053999 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 23:50:55.054051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:50:55.062546 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 23:50:55.062590 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:50:55.074252 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:50:55.102115 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 23:50:55.102280 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:50:55.117741 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 23:50:55.117834 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:50:55.126139 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 23:50:55.126169 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:50:55.135666 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 23:50:55.135711 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:50:55.149939 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 23:50:55.150007 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:50:55.170027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:50:55.170103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:50:55.199135 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 23:50:55.209951 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 23:50:55.210032 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:50:55.225523 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 23:50:55.225590 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:50:55.237081 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:50:55.237131 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:50:55.248593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:50:55.248641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:50:55.258771 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:50:55.258904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:50:55.268700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 23:50:55.268784 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 23:50:55.279750 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 23:50:55.279852 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 23:50:55.290093 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 23:50:55.298237 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:50:55.298341 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:50:55.321373 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 23:50:55.705098 systemd[1]: Switching root.
Jan 23 23:50:55.730281 systemd-journald[217]: Journal stopped
Jan 23 23:51:00.217280 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jan 23 23:51:00.217309 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 23:51:00.217320 kernel: SELinux: policy capability open_perms=1
Jan 23 23:51:00.217331 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 23:51:00.217338 kernel: SELinux: policy capability always_check_network=0
Jan 23 23:51:00.217346 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 23:51:00.217355 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 23:51:00.217364 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 23:51:00.217372 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 23:51:00.217380 kernel: audit: type=1403 audit(1769212257.365:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 23:51:00.217391 systemd[1]: Successfully loaded SELinux policy in 176.970ms.
Jan 23 23:51:00.217401 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.836ms.
Jan 23 23:51:00.217411 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:51:00.217420 systemd[1]: Detected virtualization microsoft.
Jan 23 23:51:00.217430 systemd[1]: Detected architecture arm64.
Jan 23 23:51:00.217441 systemd[1]: Detected first boot.
Jan 23 23:51:00.217450 systemd[1]: Hostname set to <ci-4081.3.6-n-73953443dc>.
Jan 23 23:51:00.217459 systemd[1]: Initializing machine ID from random generator.
Jan 23 23:51:00.217468 zram_generator::config[1213]: No configuration found.
Jan 23 23:51:00.217478 systemd[1]: Populated /etc with preset unit settings.
Jan 23 23:51:00.217488 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 23:51:00.217498 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 23:51:00.217508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 23:51:00.217520 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 23:51:00.217530 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 23:51:00.217539 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 23:51:00.217549 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 23:51:00.217559 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 23:51:00.217570 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 23:51:00.217579 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 23:51:00.217589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:51:00.217599 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:51:00.217608 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 23:51:00.217617 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 23:51:00.217627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 23:51:00.217636 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:51:00.217646 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 23:51:00.217656 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:51:00.217666 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 23:51:00.217675 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:51:00.217687 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:51:00.217696 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:51:00.217706 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:51:00.217715 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 23:51:00.217728 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 23:51:00.217738 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:51:00.217747 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:51:00.217757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:51:00.217766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:51:00.217776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:51:00.217786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 23:51:00.217797 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 23:51:00.217807 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 23:51:00.217817 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 23:51:00.217827 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 23:51:00.217837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 23:51:00.217846 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 23:51:00.219898 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 23:51:00.219926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:51:00.219937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:51:00.219947 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 23:51:00.219958 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:51:00.219968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:51:00.219978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:51:00.219988 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 23:51:00.219998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:51:00.220014 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 23:51:00.220024 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 23 23:51:00.220034 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 23 23:51:00.220045 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:51:00.220054 kernel: fuse: init (API version 7.39)
Jan 23 23:51:00.220063 kernel: loop: module loaded
Jan 23 23:51:00.220072 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:51:00.220085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 23:51:00.220098 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 23:51:00.220134 systemd-journald[1332]: Collecting audit messages is disabled.
Jan 23 23:51:00.220158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:51:00.220169 systemd-journald[1332]: Journal started
Jan 23 23:51:00.220192 systemd-journald[1332]: Runtime Journal (/run/log/journal/6e21960a8fe949efb82d22ca3d1f1657) is 8.0M, max 78.5M, 70.5M free.
Jan 23 23:51:00.242226 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:51:00.243242 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 23:51:00.248166 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 23:51:00.253433 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 23:51:00.258007 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 23:51:00.263214 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 23:51:00.268481 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 23:51:00.273238 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 23:51:00.279169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:51:00.285707 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 23:51:00.286044 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 23:51:00.291962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:51:00.292117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:51:00.297584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:51:00.297739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:51:00.303610 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 23:51:00.303756 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 23:51:00.308994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:51:00.309135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:51:00.314824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:51:00.320086 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 23:51:00.326596 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 23:51:00.334733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:51:00.347252 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 23:51:00.355936 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 23:51:00.363753 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 23:51:00.368662 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 23:51:00.395001 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 23:51:00.404054 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 23:51:00.416895 kernel: ACPI: bus type drm_connector registered
Jan 23 23:51:00.416408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:51:00.421741 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 23:51:00.427225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:51:00.429010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:51:00.445159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:51:00.454032 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 23 23:51:00.463502 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:51:00.466105 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:51:00.471236 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 23:51:00.476867 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 23:51:00.477823 systemd-journald[1332]: Time spent on flushing to /var/log/journal/6e21960a8fe949efb82d22ca3d1f1657 is 14.106ms for 885 entries.
Jan 23 23:51:00.477823 systemd-journald[1332]: System Journal (/var/log/journal/6e21960a8fe949efb82d22ca3d1f1657) is 8.0M, max 2.6G, 2.6G free.
Jan 23 23:51:00.528206 systemd-journald[1332]: Received client request to flush runtime journal.
Jan 23 23:51:00.486596 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 23:51:00.497487 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 23:51:00.505431 udevadm[1372]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 23 23:51:00.529390 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 23:51:00.591227 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Jan 23 23:51:00.591242 systemd-tmpfiles[1370]: ACLs are not supported, ignoring.
Jan 23 23:51:00.596255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:51:00.610121 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 23:51:00.625300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:51:00.773771 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 23:51:00.790078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:51:00.807116 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Jan 23 23:51:00.807132 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Jan 23 23:51:00.813841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:51:01.157577 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 23:51:01.171100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:51:01.190481 systemd-udevd[1397]: Using default interface naming scheme 'v255'.
Jan 23 23:51:01.330604 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:51:01.360228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:51:01.384144 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 23:51:01.407645 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 23 23:51:01.463135 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 23:51:01.525909 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 23:51:01.542486 kernel: hv_vmbus: registering driver hv_balloon
Jan 23 23:51:01.542587 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#35 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 23:51:01.542818 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 23 23:51:01.553677 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 23 23:51:01.579767 systemd-networkd[1410]: lo: Link UP
Jan 23 23:51:01.579780 systemd-networkd[1410]: lo: Gained carrier
Jan 23 23:51:01.582270 systemd-networkd[1410]: Enumeration completed
Jan 23 23:51:01.582778 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:51:01.582781 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:51:01.587871 kernel: hv_vmbus: registering driver hyperv_fb
Jan 23 23:51:01.591164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:51:01.597146 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 23 23:51:01.603455 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:51:01.603868 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 23 23:51:01.611766 kernel: Console: switching to colour dummy device 80x25
Jan 23 23:51:01.618958 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 23:51:01.622614 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 23:51:01.630730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:51:01.631161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:51:01.649126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:51:01.669071 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1402)
Jan 23 23:51:01.674294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:51:01.674552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:51:01.686429 kernel: mlx5_core 9473:00:02.0 enP38003s1: Link up
Jan 23 23:51:01.695080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:51:01.713959 kernel: hv_netvsc 7ced8dd2-f270-7ced-8dd2-f2707ced8dd2 eth0: Data path switched to VF: enP38003s1
Jan 23 23:51:01.715015 systemd-networkd[1410]: enP38003s1: Link UP
Jan 23 23:51:01.715114 systemd-networkd[1410]: eth0: Link UP
Jan 23 23:51:01.715118 systemd-networkd[1410]: eth0: Gained carrier
Jan 23 23:51:01.715132 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:51:01.720191 systemd-networkd[1410]: enP38003s1: Gained carrier
Jan 23 23:51:01.731242 systemd-networkd[1410]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 23:51:01.767297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 23:51:01.837377 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 23 23:51:01.850004 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 23 23:51:01.909078 lvm[1493]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:51:01.938295 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 23 23:51:01.944597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:51:01.953985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 23 23:51:01.963580 lvm[1496]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:51:01.990528 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 23 23:51:01.996457 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:51:02.001712 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 23:51:02.001738 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:51:02.006311 systemd[1]: Reached target machines.target - Containers.
Jan 23 23:51:02.011350 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 23 23:51:02.025009 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 23:51:02.031140 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 23:51:02.035824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:51:02.036898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 23:51:02.043338 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 23 23:51:02.051144 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 23:51:02.064753 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 23:51:02.114613 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 23:51:02.134886 kernel: loop0: detected capacity change from 0 to 114432
Jan 23 23:51:02.141592 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 23:51:02.142884 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 23 23:51:02.255043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:51:02.500188 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 23:51:02.534874 kernel: loop1: detected capacity change from 0 to 207008
Jan 23 23:51:02.611884 kernel: loop2: detected capacity change from 0 to 31320
Jan 23 23:51:02.977886 kernel: loop3: detected capacity change from 0 to 114328
Jan 23 23:51:03.067965 systemd-networkd[1410]: eth0: Gained IPv6LL
Jan 23 23:51:03.076325 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 23:51:03.388882 kernel: loop4: detected capacity change from 0 to 114432
Jan 23 23:51:03.401994 kernel: loop5: detected capacity change from 0 to 207008
Jan 23 23:51:03.418884 kernel: loop6: detected capacity change from 0 to 31320
Jan 23 23:51:03.429881 kernel: loop7: detected capacity change from 0 to 114328
Jan 23 23:51:03.438600 (sd-merge)[1523]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 23 23:51:03.439071 (sd-merge)[1523]: Merged extensions into '/usr'.
Jan 23 23:51:03.443190 systemd[1]: Reloading requested from client PID 1503 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 23:51:03.443306 systemd[1]: Reloading...
Jan 23 23:51:03.511885 zram_generator::config[1553]: No configuration found.
Jan 23 23:51:03.645202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:51:03.714135 systemd[1]: Reloading finished in 270 ms.
Jan 23 23:51:03.726563 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 23:51:03.737983 systemd[1]: Starting ensure-sysext.service...
Jan 23 23:51:03.744031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:51:03.751412 systemd[1]: Reloading requested from client PID 1611 ('systemctl') (unit ensure-sysext.service)...
Jan 23 23:51:03.751536 systemd[1]: Reloading...
Jan 23 23:51:03.789867 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 23:51:03.790140 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 23:51:03.791478 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 23:51:03.791706 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Jan 23 23:51:03.791756 systemd-tmpfiles[1612]: ACLs are not supported, ignoring.
Jan 23 23:51:03.808919 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:51:03.808930 systemd-tmpfiles[1612]: Skipping /boot
Jan 23 23:51:03.817955 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:51:03.817968 systemd-tmpfiles[1612]: Skipping /boot
Jan 23 23:51:03.824885 zram_generator::config[1640]: No configuration found.
Jan 23 23:51:03.946798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:51:04.019849 systemd[1]: Reloading finished in 267 ms.
Jan 23 23:51:04.034037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:51:04.059192 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 23 23:51:04.069046 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 23:51:04.078036 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 23:51:04.087045 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:51:04.104088 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 23:51:04.114828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:51:04.117928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:51:04.133140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:51:04.146644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:51:04.155035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:51:04.156106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:51:04.156273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:51:04.164048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:51:04.164299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:51:04.170308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:51:04.170627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:51:04.180357 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 23:51:04.195737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:51:04.201767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:51:04.209939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:51:04.229195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:51:04.235430 systemd-resolved[1709]: Positive Trust Anchors:
Jan 23 23:51:04.235447 systemd-resolved[1709]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:51:04.235480 systemd-resolved[1709]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:51:04.238551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:51:04.244116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:51:04.244625 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 23:51:04.251902 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 23:51:04.258756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:51:04.259115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:51:04.265592 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:51:04.266000 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:51:04.271255 augenrules[1744]: No rules
Jan 23 23:51:04.271929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:51:04.272086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:51:04.278357 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 23 23:51:04.284247 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:51:04.285557 systemd-resolved[1709]: Using system hostname 'ci-4081.3.6-n-73953443dc'.
Jan 23 23:51:04.287336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:51:04.292554 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:51:04.301427 systemd[1]: Finished ensure-sysext.service.
Jan 23 23:51:04.309340 systemd[1]: Reached target network.target - Network.
Jan 23 23:51:04.313192 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 23:51:04.317977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:51:04.323154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:51:04.323227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:51:04.625429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 23:51:04.631598 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 23:51:07.481615 ldconfig[1500]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 23:51:07.497682 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 23:51:07.510035 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 23:51:07.522794 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 23:51:07.528754 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:51:07.533509 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 23:51:07.539201 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 23:51:07.544984 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 23:51:07.549792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 23:51:07.555382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 23:51:07.560958 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 23:51:07.561113 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:51:07.565052 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:51:07.571952 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 23:51:07.578588 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 23:51:07.584138 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 23:51:07.589797 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 23:51:07.594760 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:51:07.599479 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:51:07.604473 systemd[1]: System is tainted: cgroupsv1
Jan 23 23:51:07.604626 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:51:07.604709 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:51:07.625933 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 23 23:51:07.632153 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:51:07.647016 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:51:07.666336 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:51:07.670365 (chronyd)[1770]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 23 23:51:07.672471 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:51:07.680040 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:51:07.684789 jq[1777]: false Jan 23 23:51:07.688450 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:51:07.688617 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 23 23:51:07.697031 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 23:51:07.701845 chronyd[1783]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 23 23:51:07.702773 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 23:51:07.711061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:07.713129 KVP[1780]: KVP starting; pid is:1780 Jan 23 23:51:07.718302 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:51:07.723433 chronyd[1783]: Timezone right/UTC failed leap second check, ignoring Jan 23 23:51:07.723614 chronyd[1783]: Loaded seccomp filter (level 2) Jan 23 23:51:07.735693 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:51:07.746942 extend-filesystems[1778]: Found loop4 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found loop5 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found loop6 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found loop7 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda1 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda2 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda3 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found usr Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda4 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda6 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda7 Jan 23 23:51:07.746942 extend-filesystems[1778]: Found sda9 Jan 23 23:51:07.746942 extend-filesystems[1778]: Checking size of /dev/sda9 Jan 23 23:51:07.946500 kernel: hv_utils: KVP IC version 4.0 Jan 23 23:51:07.946539 coreos-metadata[1773]: Jan 23 23:51:07.936 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 23:51:07.946539 coreos-metadata[1773]: Jan 23 23:51:07.941 INFO Fetch successful Jan 23 23:51:07.946539 coreos-metadata[1773]: Jan 23 23:51:07.942 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 23:51:07.750000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 23 23:51:07.799793 KVP[1780]: KVP LIC Version: 3.1 Jan 23 23:51:07.947206 extend-filesystems[1778]: Old size kept for /dev/sda9 Jan 23 23:51:07.947206 extend-filesystems[1778]: Found sr0 Jan 23 23:51:07.975133 coreos-metadata[1773]: Jan 23 23:51:07.946 INFO Fetch successful Jan 23 23:51:07.975133 coreos-metadata[1773]: Jan 23 23:51:07.947 INFO Fetching http://168.63.129.16/machine/97d3680a-0a96-4548-892c-373e3d8e2f03/419a50ce%2Dfd65%2D4238%2Da8de%2D3e569e889e65.%5Fci%2D4081.3.6%2Dn%2D73953443dc?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 23:51:07.975133 coreos-metadata[1773]: Jan 23 23:51:07.951 INFO Fetch successful Jan 23 23:51:07.975133 coreos-metadata[1773]: Jan 23 23:51:07.951 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 23:51:07.975133 coreos-metadata[1773]: Jan 23 23:51:07.963 INFO Fetch successful Jan 23 23:51:07.758964 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:51:07.843351 dbus-daemon[1776]: [system] SELinux support is enabled Jan 23 23:51:07.779492 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:51:07.801075 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:51:07.812003 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:51:07.815564 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:51:07.985173 update_engine[1809]: I20260123 23:51:07.944401 1809 main.cc:92] Flatcar Update Engine starting Jan 23 23:51:07.985173 update_engine[1809]: I20260123 23:51:07.958063 1809 update_check_scheduler.cc:74] Next update check in 9m28s Jan 23 23:51:07.854988 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:51:07.985484 jq[1816]: true Jan 23 23:51:07.869312 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:51:07.878436 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 23:51:07.898233 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:51:07.898479 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:51:07.898720 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:51:07.900370 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:51:07.927963 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:51:07.928204 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:51:07.933483 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:51:07.940170 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:51:07.940399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:51:07.997253 jq[1830]: true Jan 23 23:51:07.978116 (ntainerd)[1831]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:51:08.007930 systemd-logind[1805]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 23:51:08.008853 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:51:08.010023 systemd-logind[1805]: New seat seat0. Jan 23 23:51:08.029979 systemd[1]: Started systemd-logind.service - User Login Management. 
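Note: coreos-metadata is talking to the two standard Azure endpoints seen above: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the Instance Metadata Service at 169.254.169.254 (the vmSize fetch). A sketch of running the same queries by hand, using the documented IMDS requirement that the Metadata header be set:

    # IMDS is only reachable from inside the VM and requires the Metadata header
    curl -s -H 'Metadata:true' \
      'http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text'
    # The WireServer version probe coreos-metadata starts with
    curl -s 'http://168.63.129.16/?comp=versions'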
Jan 23 23:51:08.042218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:51:08.042383 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:51:08.055544 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:51:08.056025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:51:08.068246 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:51:08.076873 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1847) Jan 23 23:51:08.078685 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:51:08.093786 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:51:08.105394 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:51:08.107463 tar[1826]: linux-arm64/LICENSE Jan 23 23:51:08.107770 tar[1826]: linux-arm64/helm Jan 23 23:51:08.202288 bash[1881]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:51:08.202809 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:51:08.216954 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 23:51:08.342360 locksmithd[1866]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:51:08.710015 tar[1826]: linux-arm64/README.md Jan 23 23:51:08.730959 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:51:08.818884 containerd[1831]: time="2026-01-23T23:51:08.818594540Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:51:08.871260 containerd[1831]: time="2026-01-23T23:51:08.870142980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:51:08.877318 containerd[1831]: time="2026-01-23T23:51:08.877269140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:51:08.877442 containerd[1831]: time="2026-01-23T23:51:08.877427500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:51:08.877815 containerd[1831]: time="2026-01-23T23:51:08.877796020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:51:08.878213 containerd[1831]: time="2026-01-23T23:51:08.878192900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:51:08.878306 containerd[1831]: time="2026-01-23T23:51:08.878294100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:51:08.879046 containerd[1831]: time="2026-01-23T23:51:08.879023860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:51:08.879537 containerd[1831]: time="2026-01-23T23:51:08.879516100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:51:08.880013 containerd[1831]: time="2026-01-23T23:51:08.879989540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:51:08.880110 containerd[1831]: time="2026-01-23T23:51:08.880096700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:51:08.880257 containerd[1831]: time="2026-01-23T23:51:08.880239340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:51:08.880395 containerd[1831]: time="2026-01-23T23:51:08.880323780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:51:08.880612 containerd[1831]: time="2026-01-23T23:51:08.880592340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:51:08.881875 containerd[1831]: time="2026-01-23T23:51:08.881036900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:51:08.882002 containerd[1831]: time="2026-01-23T23:51:08.881981340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:51:08.882091 containerd[1831]: time="2026-01-23T23:51:08.882078020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:51:08.882245 containerd[1831]: time="2026-01-23T23:51:08.882228940Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:51:08.882359 containerd[1831]: time="2026-01-23T23:51:08.882344300Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:51:08.901826 containerd[1831]: time="2026-01-23T23:51:08.901784300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:51:08.902046 containerd[1831]: time="2026-01-23T23:51:08.902022620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:51:08.902144 containerd[1831]: time="2026-01-23T23:51:08.902131180Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:51:08.902274 containerd[1831]: time="2026-01-23T23:51:08.902203620Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:51:08.902274 containerd[1831]: time="2026-01-23T23:51:08.902222780Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 23 23:51:08.903663 containerd[1831]: time="2026-01-23T23:51:08.902481740Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:51:08.906675 containerd[1831]: time="2026-01-23T23:51:08.906637100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:51:08.907165 containerd[1831]: time="2026-01-23T23:51:08.907135580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:51:08.907216 containerd[1831]: time="2026-01-23T23:51:08.907178980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:51:08.907216 containerd[1831]: time="2026-01-23T23:51:08.907199540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:51:08.907275 containerd[1831]: time="2026-01-23T23:51:08.907220140Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907275 containerd[1831]: time="2026-01-23T23:51:08.907236340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907275 containerd[1831]: time="2026-01-23T23:51:08.907254500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907330 containerd[1831]: time="2026-01-23T23:51:08.907274660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907330 containerd[1831]: time="2026-01-23T23:51:08.907293980Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907330 containerd[1831]: time="2026-01-23T23:51:08.907311460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907330 containerd[1831]: time="2026-01-23T23:51:08.907327900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907398 containerd[1831]: time="2026-01-23T23:51:08.907344020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:51:08.907398 containerd[1831]: time="2026-01-23T23:51:08.907368980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907398 containerd[1831]: time="2026-01-23T23:51:08.907384500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907456 containerd[1831]: time="2026-01-23T23:51:08.907409620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907456 containerd[1831]: time="2026-01-23T23:51:08.907429220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907456 containerd[1831]: time="2026-01-23T23:51:08.907445740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907548 containerd[1831]: time="2026-01-23T23:51:08.907465820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 23 23:51:08.907548 containerd[1831]: time="2026-01-23T23:51:08.907483060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907548 containerd[1831]: time="2026-01-23T23:51:08.907497500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907548 containerd[1831]: time="2026-01-23T23:51:08.907514420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907548 containerd[1831]: time="2026-01-23T23:51:08.907535060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907633 containerd[1831]: time="2026-01-23T23:51:08.907551020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907633 containerd[1831]: time="2026-01-23T23:51:08.907566980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907633 containerd[1831]: time="2026-01-23T23:51:08.907582940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907633 containerd[1831]: time="2026-01-23T23:51:08.907605140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:51:08.907699 containerd[1831]: time="2026-01-23T23:51:08.907632060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907699 containerd[1831]: time="2026-01-23T23:51:08.907649820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907699 containerd[1831]: time="2026-01-23T23:51:08.907665620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:51:08.907755 containerd[1831]: time="2026-01-23T23:51:08.907720580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:51:08.907755 containerd[1831]: time="2026-01-23T23:51:08.907743980Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:51:08.907791 containerd[1831]: time="2026-01-23T23:51:08.907759100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:51:08.907791 containerd[1831]: time="2026-01-23T23:51:08.907775260Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:51:08.907828 containerd[1831]: time="2026-01-23T23:51:08.907789140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:51:08.907828 containerd[1831]: time="2026-01-23T23:51:08.907805820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:51:08.907828 containerd[1831]: time="2026-01-23T23:51:08.907816540Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:51:08.907898 containerd[1831]: time="2026-01-23T23:51:08.907831620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:51:08.908216 containerd[1831]: time="2026-01-23T23:51:08.908149420Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:51:08.908420 containerd[1831]: time="2026-01-23T23:51:08.908227420Z" level=info msg="Connect containerd service" Jan 23 23:51:08.908420 containerd[1831]: time="2026-01-23T23:51:08.908281820Z" level=info msg="using legacy CRI server" Jan 23 23:51:08.908420 containerd[1831]: time="2026-01-23T23:51:08.908289980Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:51:08.908420 containerd[1831]: time="2026-01-23T23:51:08.908389900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909325660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 
23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909629380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909672540Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909711340Z" level=info msg="Start subscribing containerd event" Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909752700Z" level=info msg="Start recovering state" Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909816380Z" level=info msg="Start event monitor" Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909826780Z" level=info msg="Start snapshots syncer" Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909840020Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:51:08.915021 containerd[1831]: time="2026-01-23T23:51:08.909850500Z" level=info msg="Start streaming server" Jan 23 23:51:08.910046 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:51:08.916973 containerd[1831]: time="2026-01-23T23:51:08.916922300Z" level=info msg="containerd successfully booted in 0.101774s" Jan 23 23:51:09.051836 sshd_keygen[1815]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:51:09.075178 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:51:09.089706 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:51:09.096368 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 23:51:09.110338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:09.116811 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:51:09.117507 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:51:09.117742 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:51:09.137517 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:51:09.146220 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 23:51:09.164080 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:51:09.176769 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:51:09.187194 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 23:51:09.192502 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:51:09.198061 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:51:09.206918 systemd[1]: Startup finished in 16.680s (kernel) + 12.017s (userspace) = 28.698s. Jan 23 23:51:09.498879 login[1960]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:09.504429 login[1961]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:09.515596 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:51:09.515804 systemd-logind[1805]: New session 1 of user core. Jan 23 23:51:09.521168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:51:09.525712 systemd-logind[1805]: New session 2 of user core. Jan 23 23:51:09.553474 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
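Note: the containerd CRI plugin came up, but the error above shows /etc/cni/net.d holds no network config, so pod networking stays uninitialized until a CNI provider (typically installed later as a DaemonSet) drops one in. Purely as an illustration of the file the loader is looking for, a minimal bridge conflist, assuming the standard bridge/host-local plugins exist under the /opt/cni/bin path containerd is configured with; all values here are hypothetical:

    # Hypothetical example config; real clusters get this from their CNI provider
    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.88.0.0/16" }]] }
        }
      ]
    }
    EOF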
Jan 23 23:51:09.561573 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:51:09.594050 (systemd)[1975]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:51:09.649774 kubelet[1946]: E0123 23:51:09.649716 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:51:09.653083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:51:09.653258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:51:09.748466 systemd[1975]: Queued start job for default target default.target. Jan 23 23:51:09.748878 systemd[1975]: Created slice app.slice - User Application Slice. Jan 23 23:51:09.748897 systemd[1975]: Reached target paths.target - Paths. Jan 23 23:51:09.748909 systemd[1975]: Reached target timers.target - Timers. Jan 23 23:51:09.759985 systemd[1975]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:51:09.773347 systemd[1975]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:51:09.773414 systemd[1975]: Reached target sockets.target - Sockets. Jan 23 23:51:09.773426 systemd[1975]: Reached target basic.target - Basic System. Jan 23 23:51:09.773468 systemd[1975]: Reached target default.target - Main User Target. Jan 23 23:51:09.773494 systemd[1975]: Startup finished in 170ms. Jan 23 23:51:09.773719 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:51:09.786187 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:51:09.788121 systemd[1]: Started session-2.scope - Session 2 of User core. 
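Note: the kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a node like this (the unit also references the kubeadm environment variables logged earlier) that file is normally written by kubeadm init/join, so the failure and the restart loop later in this log are expected until the node is bootstrapped. Purely as an illustration of what the kubelet is looking for, a skeletal KubeletConfiguration; the field values are assumptions, not recovered from this log:

    # Hypothetical minimal file; kubeadm generates the real one during join
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF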
Jan 23 23:51:10.745833 waagent[1956]: 2026-01-23T23:51:10.745714Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 23 23:51:10.750494 waagent[1956]: 2026-01-23T23:51:10.750417Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 23 23:51:10.754451 waagent[1956]: 2026-01-23T23:51:10.754395Z INFO Daemon Daemon Python: 3.11.9 Jan 23 23:51:10.758224 waagent[1956]: 2026-01-23T23:51:10.758155Z INFO Daemon Daemon Run daemon Jan 23 23:51:10.762373 waagent[1956]: 2026-01-23T23:51:10.762305Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 23 23:51:10.770322 waagent[1956]: 2026-01-23T23:51:10.770072Z INFO Daemon Daemon Using waagent for provisioning Jan 23 23:51:10.774632 waagent[1956]: 2026-01-23T23:51:10.774583Z INFO Daemon Daemon Activate resource disk Jan 23 23:51:10.778574 waagent[1956]: 2026-01-23T23:51:10.778525Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 23:51:10.788602 waagent[1956]: 2026-01-23T23:51:10.788538Z INFO Daemon Daemon Found device: None Jan 23 23:51:10.792375 waagent[1956]: 2026-01-23T23:51:10.792321Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 23:51:10.799472 waagent[1956]: 2026-01-23T23:51:10.799423Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 23:51:10.810587 waagent[1956]: 2026-01-23T23:51:10.810524Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:51:10.815392 waagent[1956]: 2026-01-23T23:51:10.815338Z INFO Daemon Daemon Running default provisioning handler Jan 23 23:51:10.825978 waagent[1956]: 2026-01-23T23:51:10.825899Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 23:51:10.837077 waagent[1956]: 2026-01-23T23:51:10.837010Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 23:51:10.844971 waagent[1956]: 2026-01-23T23:51:10.844903Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 23:51:10.848896 waagent[1956]: 2026-01-23T23:51:10.848847Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 23:51:10.965443 waagent[1956]: 2026-01-23T23:51:10.965349Z INFO Daemon Daemon Successfully mounted dvd Jan 23 23:51:10.979110 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 23:51:10.982259 waagent[1956]: 2026-01-23T23:51:10.982175Z INFO Daemon Daemon Detect protocol endpoint Jan 23 23:51:10.985986 waagent[1956]: 2026-01-23T23:51:10.985935Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 23:51:10.990367 waagent[1956]: 2026-01-23T23:51:10.990319Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 23 23:51:10.995692 waagent[1956]: 2026-01-23T23:51:10.995648Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 23:51:10.999898 waagent[1956]: 2026-01-23T23:51:10.999815Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 23:51:11.003731 waagent[1956]: 2026-01-23T23:51:11.003685Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 23:51:11.049617 waagent[1956]: 2026-01-23T23:51:11.049569Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 23:51:11.054832 waagent[1956]: 2026-01-23T23:51:11.054804Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 23:51:11.059471 waagent[1956]: 2026-01-23T23:51:11.059426Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 23:51:11.418908 waagent[1956]: 2026-01-23T23:51:11.418739Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 23:51:11.424475 waagent[1956]: 2026-01-23T23:51:11.424381Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 23:51:11.432975 waagent[1956]: 2026-01-23T23:51:11.432925Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:51:11.453392 waagent[1956]: 2026-01-23T23:51:11.453345Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 23:51:11.458042 waagent[1956]: 2026-01-23T23:51:11.457993Z INFO Daemon Jan 23 23:51:11.460341 waagent[1956]: 2026-01-23T23:51:11.460295Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8f634cbf-1764-4090-969a-8a758c67b8ca eTag: 4766562724807816480 source: Fabric] Jan 23 23:51:11.469064 waagent[1956]: 2026-01-23T23:51:11.469015Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 23:51:11.474763 waagent[1956]: 2026-01-23T23:51:11.474715Z INFO Daemon Jan 23 23:51:11.476929 waagent[1956]: 2026-01-23T23:51:11.476890Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:51:11.486518 waagent[1956]: 2026-01-23T23:51:11.486474Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 23:51:11.640723 waagent[1956]: 2026-01-23T23:51:11.640622Z INFO Daemon Downloaded certificate {'thumbprint': '9FA7D53A58DBE7B6EEEB575CBE2EDEC6CA375504', 'hasPrivateKey': True} Jan 23 23:51:11.649238 waagent[1956]: 2026-01-23T23:51:11.649184Z INFO Daemon Fetch goal state completed Jan 23 23:51:11.659415 waagent[1956]: 2026-01-23T23:51:11.659368Z INFO Daemon Daemon Starting provisioning Jan 23 23:51:11.663450 waagent[1956]: 2026-01-23T23:51:11.663390Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 23:51:11.667560 waagent[1956]: 2026-01-23T23:51:11.667517Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-73953443dc] Jan 23 23:51:11.692786 waagent[1956]: 2026-01-23T23:51:11.687978Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-73953443dc] Jan 23 23:51:11.693201 waagent[1956]: 2026-01-23T23:51:11.693139Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 23:51:11.698083 waagent[1956]: 2026-01-23T23:51:11.698031Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 23:51:11.739985 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:51:11.739992 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
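Note: waagent's "Test for route to 168.63.129.16" mirrors a check that can be run by hand; the WireServer must be reachable over the primary interface for provisioning to proceed. A sketch:

    # Which interface and gateway would carry WireServer traffic
    ip route get 168.63.129.16
    # The same version probe the agent uses to confirm reachability
    curl -s 'http://168.63.129.16/?comp=versions'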
Jan 23 23:51:11.740037 systemd-networkd[1410]: eth0: DHCP lease lost Jan 23 23:51:11.741426 waagent[1956]: 2026-01-23T23:51:11.741333Z INFO Daemon Daemon Create user account if not exists Jan 23 23:51:11.745680 waagent[1956]: 2026-01-23T23:51:11.745624Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 23:51:11.749971 waagent[1956]: 2026-01-23T23:51:11.749923Z INFO Daemon Daemon Configure sudoer Jan 23 23:51:11.753666 waagent[1956]: 2026-01-23T23:51:11.753609Z INFO Daemon Daemon Configure sshd Jan 23 23:51:11.757274 waagent[1956]: 2026-01-23T23:51:11.757219Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 23:51:11.767517 waagent[1956]: 2026-01-23T23:51:11.767465Z INFO Daemon Daemon Deploy ssh public key. Jan 23 23:51:11.775035 systemd-networkd[1410]: eth0: DHCPv6 lease lost Jan 23 23:51:11.790957 systemd-networkd[1410]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 23:51:12.882842 waagent[1956]: 2026-01-23T23:51:12.882786Z INFO Daemon Daemon Provisioning complete Jan 23 23:51:12.899327 waagent[1956]: 2026-01-23T23:51:12.899278Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 23:51:12.904261 waagent[1956]: 2026-01-23T23:51:12.904190Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 23:51:12.912696 waagent[1956]: 2026-01-23T23:51:12.912609Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 23 23:51:13.046839 waagent[2033]: 2026-01-23T23:51:13.046197Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 23 23:51:13.046839 waagent[2033]: 2026-01-23T23:51:13.046352Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 23 23:51:13.046839 waagent[2033]: 2026-01-23T23:51:13.046405Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 23 23:51:13.082247 waagent[2033]: 2026-01-23T23:51:13.082159Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 23 23:51:13.082588 waagent[2033]: 2026-01-23T23:51:13.082549Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:51:13.082735 waagent[2033]: 2026-01-23T23:51:13.082702Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:51:13.090852 waagent[2033]: 2026-01-23T23:51:13.090774Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 23:51:13.096773 waagent[2033]: 2026-01-23T23:51:13.096725Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 23:51:13.097458 waagent[2033]: 2026-01-23T23:51:13.097403Z INFO ExtHandler Jan 23 23:51:13.097642 waagent[2033]: 2026-01-23T23:51:13.097607Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f561e5e3-a148-4092-a939-d599bfc348d3 eTag: 4766562724807816480 source: Fabric] Jan 23 23:51:13.098897 waagent[2033]: 2026-01-23T23:51:13.098022Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 23:51:13.098897 waagent[2033]: 2026-01-23T23:51:13.098611Z INFO ExtHandler Jan 23 23:51:13.098897 waagent[2033]: 2026-01-23T23:51:13.098684Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 23:51:13.102791 waagent[2033]: 2026-01-23T23:51:13.102753Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:51:13.173747 waagent[2033]: 2026-01-23T23:51:13.173584Z INFO ExtHandler Downloaded certificate {'thumbprint': '9FA7D53A58DBE7B6EEEB575CBE2EDEC6CA375504', 'hasPrivateKey': True} Jan 23 23:51:13.174279 waagent[2033]: 2026-01-23T23:51:13.174228Z INFO ExtHandler Fetch goal state completed Jan 23 23:51:13.189641 waagent[2033]: 2026-01-23T23:51:13.189580Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2033 Jan 23 23:51:13.189796 waagent[2033]: 2026-01-23T23:51:13.189762Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 23:51:13.191494 waagent[2033]: 2026-01-23T23:51:13.191445Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 23:51:13.191856 waagent[2033]: 2026-01-23T23:51:13.191818Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 23:51:13.224552 waagent[2033]: 2026-01-23T23:51:13.224504Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 23:51:13.224756 waagent[2033]: 2026-01-23T23:51:13.224717Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 23:51:13.231344 waagent[2033]: 2026-01-23T23:51:13.231298Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 23:51:13.238281 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit waagent.service)... Jan 23 23:51:13.238295 systemd[1]: Reloading... Jan 23 23:51:13.315882 zram_generator::config[2083]: No configuration found. Jan 23 23:51:13.426196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:51:13.504310 systemd[1]: Reloading finished in 265 ms. Jan 23 23:51:13.521892 waagent[2033]: 2026-01-23T23:51:13.521112Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 23 23:51:13.528094 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit waagent.service)... Jan 23 23:51:13.528109 systemd[1]: Reloading... Jan 23 23:51:13.601896 zram_generator::config[2174]: No configuration found. Jan 23 23:51:13.713754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:51:13.787579 systemd[1]: Reloading finished in 259 ms. Jan 23 23:51:13.808499 waagent[2033]: 2026-01-23T23:51:13.807224Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 23:51:13.808499 waagent[2033]: 2026-01-23T23:51:13.807951Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 23:51:14.139580 waagent[2033]: 2026-01-23T23:51:14.139427Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 23 23:51:14.140104 waagent[2033]: 2026-01-23T23:51:14.140053Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 23 23:51:14.140908 waagent[2033]: 2026-01-23T23:51:14.140831Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 23:51:14.141059 waagent[2033]: 2026-01-23T23:51:14.140980Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:51:14.141212 waagent[2033]: 2026-01-23T23:51:14.141178Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:51:14.141581 waagent[2033]: 2026-01-23T23:51:14.141531Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 23:51:14.141864 waagent[2033]: 2026-01-23T23:51:14.141766Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 23:51:14.142023 waagent[2033]: 2026-01-23T23:51:14.141911Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 23:51:14.142393 waagent[2033]: 2026-01-23T23:51:14.142340Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 23:51:14.142598 waagent[2033]: 2026-01-23T23:51:14.142512Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 23:51:14.143102 waagent[2033]: 2026-01-23T23:51:14.143048Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 23:51:14.143257 waagent[2033]: 2026-01-23T23:51:14.143223Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 23:51:14.143428 waagent[2033]: 2026-01-23T23:51:14.143390Z INFO EnvHandler ExtHandler Configure routes Jan 23 23:51:14.143490 waagent[2033]: 2026-01-23T23:51:14.143462Z INFO EnvHandler ExtHandler Gateway:None Jan 23 23:51:14.143533 waagent[2033]: 2026-01-23T23:51:14.143510Z INFO EnvHandler ExtHandler Routes:None Jan 23 23:51:14.144187 waagent[2033]: 2026-01-23T23:51:14.144133Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 23:51:14.144187 waagent[2033]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 23:51:14.144187 waagent[2033]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 23:51:14.144187 waagent[2033]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 23:51:14.144187 waagent[2033]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:51:14.144187 waagent[2033]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:51:14.144187 waagent[2033]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 23:51:14.144887 waagent[2033]: 2026-01-23T23:51:14.144574Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 23:51:14.146320 waagent[2033]: 2026-01-23T23:51:14.146259Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
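Note: the /proc/net/route dump above prints IPv4 addresses as little-endian hex: 0114C80A is the gateway 10.200.20.1, 0014C80A with mask 00FFFFFF is the local subnet 10.200.20.0/24, and the two FFFFFFFF-mask host routes 10813FA8 and FEA9FEA9 are the WireServer 168.63.129.16 and IMDS 169.254.169.254. Decoding one entry by hand, as a sketch:

    # /proc/net/route stores addresses little-endian: reverse the byte pairs
    # 0114C80A -> 0A.C8.14.01 -> 10.200.20.1
    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x01
    # or let iproute2 print the same table in dotted-quad form
    ip route show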
Jan 23 23:51:14.155052 waagent[2033]: 2026-01-23T23:51:14.154995Z INFO ExtHandler ExtHandler Jan 23 23:51:14.155138 waagent[2033]: 2026-01-23T23:51:14.155115Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 94af9483-20c9-4cd0-8680-cfb7fb270301 correlation e9f68aa3-434b-45f0-8305-1a6968adb595 created: 2026-01-23T23:50:08.772560Z] Jan 23 23:51:14.155525 waagent[2033]: 2026-01-23T23:51:14.155478Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:51:14.157986 waagent[2033]: 2026-01-23T23:51:14.157927Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 23 23:51:14.189679 waagent[2033]: 2026-01-23T23:51:14.189607Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A416FDC3-4549-461B-99B0-38B368F92A84;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 23 23:51:14.203214 waagent[2033]: 2026-01-23T23:51:14.203137Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 23:51:14.203214 waagent[2033]: Executing ['ip', '-a', '-o', 'link']: Jan 23 23:51:14.203214 waagent[2033]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 23:51:14.203214 waagent[2033]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d2:f2:70 brd ff:ff:ff:ff:ff:ff Jan 23 23:51:14.203214 waagent[2033]: 3: enP38003s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d2:f2:70 brd ff:ff:ff:ff:ff:ff\ altname enP38003p0s2 Jan 23 23:51:14.203214 waagent[2033]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 23:51:14.203214 waagent[2033]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 23:51:14.203214 waagent[2033]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 23:51:14.203214 waagent[2033]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 23:51:14.203214 waagent[2033]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 23:51:14.203214 waagent[2033]: 2: eth0 inet6 fe80::7eed:8dff:fed2:f270/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 23:51:14.252461 waagent[2033]: 2026-01-23T23:51:14.252391Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 23 23:51:14.252461 waagent[2033]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:51:14.252461 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:51:14.252461 waagent[2033]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:51:14.252461 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:51:14.252461 waagent[2033]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:51:14.252461 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:51:14.252461 waagent[2033]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:51:14.252461 waagent[2033]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:51:14.252461 waagent[2033]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:51:14.255598 waagent[2033]: 2026-01-23T23:51:14.255538Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 23:51:14.255598 waagent[2033]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:51:14.255598 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:51:14.255598 waagent[2033]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:51:14.255598 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:51:14.255598 waagent[2033]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 23:51:14.255598 waagent[2033]: pkts bytes target prot opt in out source destination Jan 23 23:51:14.255598 waagent[2033]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 23:51:14.255598 waagent[2033]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 23:51:14.255598 waagent[2033]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 23:51:14.255851 waagent[2033]: 2026-01-23T23:51:14.255817Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 23:51:19.903938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:51:19.911045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:20.024465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:20.027323 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:51:20.125686 kubelet[2275]: E0123 23:51:20.125615 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:51:20.130067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:51:20.130236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:51:27.826037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:51:27.832112 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:51610.service - OpenSSH per-connection server daemon (10.200.16.10:51610). 
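Note: the three OUTPUT rules the agent installs allow DNS (tcp/53) and root-owned traffic to the WireServer while dropping new connections from any other user, so unprivileged workloads cannot reach 168.63.129.16; order matters, with the ACCEPTs evaluated before the DROP. Reconstructed as plain iptables commands, an equivalent sketch rather than the agent's literal invocation:

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP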
Jan 23 23:51:28.316445 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 51610 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:28.317699 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:28.322992 systemd-logind[1805]: New session 3 of user core. Jan 23 23:51:28.329164 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:51:28.731084 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:51612.service - OpenSSH per-connection server daemon (10.200.16.10:51612). Jan 23 23:51:29.245194 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 51612 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:29.246650 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:29.250975 systemd-logind[1805]: New session 4 of user core. Jan 23 23:51:29.258240 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:51:29.595300 sshd[2288]: pam_unix(sshd:session): session closed for user core Jan 23 23:51:29.598652 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:51612.service: Deactivated successfully. Jan 23 23:51:29.601292 systemd-logind[1805]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:51:29.601600 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:51:29.603033 systemd-logind[1805]: Removed session 4. Jan 23 23:51:29.683074 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:39746.service - OpenSSH per-connection server daemon (10.200.16.10:39746). Jan 23 23:51:30.168741 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 39746 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:30.170063 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:30.170875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:51:30.177017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:30.181954 systemd-logind[1805]: New session 5 of user core. Jan 23 23:51:30.185770 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:51:30.521138 sshd[2296]: pam_unix(sshd:session): session closed for user core Jan 23 23:51:30.528697 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:39746.service: Deactivated successfully. Jan 23 23:51:30.531753 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:51:30.535926 systemd-logind[1805]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:51:30.538068 systemd-logind[1805]: Removed session 5. Jan 23 23:51:30.539799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:30.548496 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:51:30.586059 kubelet[2316]: E0123 23:51:30.586001 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:51:30.588773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:51:30.588970 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
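Note: the sshd@N-address:port-peer:port.service instance names above show OpenSSH running socket-activated: sshd.socket (listed earlier in this boot) should carry Accept=yes, so systemd forks one sshd@ instance per incoming connection. A sketch for inspecting this:

    systemctl cat sshd.socket        # look for Accept=yes
    systemctl list-units 'sshd@*'    # one instance per active connection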
Jan 23 23:51:30.607102 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:39750.service - OpenSSH per-connection server daemon (10.200.16.10:39750). Jan 23 23:51:31.087411 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 39750 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:31.088715 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:31.093000 systemd-logind[1805]: New session 6 of user core. Jan 23 23:51:31.103166 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:51:31.439088 sshd[2324]: pam_unix(sshd:session): session closed for user core Jan 23 23:51:31.442396 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:39750.service: Deactivated successfully. Jan 23 23:51:31.445353 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:51:31.446084 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:51:31.446836 systemd-logind[1805]: Removed session 6. Jan 23 23:51:31.510650 chronyd[1783]: Selected source PHC0 Jan 23 23:51:31.530239 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:39760.service - OpenSSH per-connection server daemon (10.200.16.10:39760). Jan 23 23:51:32.006878 sshd[2332]: Accepted publickey for core from 10.200.16.10 port 39760 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:32.008149 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:32.011810 systemd-logind[1805]: New session 7 of user core. Jan 23 23:51:32.019093 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:51:32.390076 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:51:32.390344 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:51:32.404683 sudo[2336]: pam_unix(sudo:session): session closed for user root Jan 23 23:51:32.480144 sshd[2332]: pam_unix(sshd:session): session closed for user core Jan 23 23:51:32.483854 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:39760.service: Deactivated successfully. Jan 23 23:51:32.487151 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:51:32.487994 systemd-logind[1805]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:51:32.489110 systemd-logind[1805]: Removed session 7. Jan 23 23:51:32.569087 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:39762.service - OpenSSH per-connection server daemon (10.200.16.10:39762). Jan 23 23:51:33.051586 sshd[2341]: Accepted publickey for core from 10.200.16.10 port 39762 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:33.053176 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:33.057290 systemd-logind[1805]: New session 8 of user core. Jan 23 23:51:33.064204 systemd[1]: Started session-8.scope - Session 8 of User core. 
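Note: "Selected source PHC0" means chronyd locked onto a PTP hardware clock rather than a network server; on Hyper-V/Azure guests the host clock is exposed as /dev/ptp0 and the image's chrony config presumably uses it as a refclock (the config line below is an assumption about typical setups, not quoted from this log). Verifying, as a sketch:

    chronyc sources -v     # PHC0 should appear as the selected source ('*')
    chronyc tracking       # offset and frequency against the chosen source
    # typical config directive that creates a PHC0 source:
    #   refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0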
Jan 23 23:51:33.326411 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:51:33.326693 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:51:33.330088 sudo[2346]: pam_unix(sudo:session): session closed for user root Jan 23 23:51:33.334999 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:51:33.335289 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:51:33.347273 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:51:33.349353 auditctl[2349]: No rules Jan 23 23:51:33.349804 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:51:33.350061 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:51:33.360277 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:51:33.380601 augenrules[2368]: No rules Jan 23 23:51:33.382291 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:51:33.385118 sudo[2345]: pam_unix(sudo:session): session closed for user root Jan 23 23:51:33.462955 sshd[2341]: pam_unix(sshd:session): session closed for user core Jan 23 23:51:33.465598 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:39762.service: Deactivated successfully. Jan 23 23:51:33.468709 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:51:33.469768 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:51:33.470779 systemd-logind[1805]: Removed session 8. Jan 23 23:51:33.547084 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:39766.service - OpenSSH per-connection server daemon (10.200.16.10:39766). Jan 23 23:51:34.031227 sshd[2377]: Accepted publickey for core from 10.200.16.10 port 39766 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:51:34.032478 sshd[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:51:34.037552 systemd-logind[1805]: New session 9 of user core. Jan 23 23:51:34.044128 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:51:34.306438 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:51:34.306710 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:51:35.510093 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:51:35.510853 (dockerd)[2397]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:51:36.503489 dockerd[2397]: time="2026-01-23T23:51:36.503279709Z" level=info msg="Starting up" Jan 23 23:51:37.197145 dockerd[2397]: time="2026-01-23T23:51:37.196932029Z" level=info msg="Loading containers: start." Jan 23 23:51:37.338882 kernel: Initializing XFRM netlink socket Jan 23 23:51:37.509198 systemd-networkd[1410]: docker0: Link UP Jan 23 23:51:37.538565 dockerd[2397]: time="2026-01-23T23:51:37.537833349Z" level=info msg="Loading containers: done." 
Jan 23 23:51:37.558522 dockerd[2397]: time="2026-01-23T23:51:37.558466149Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:51:37.558773 dockerd[2397]: time="2026-01-23T23:51:37.558756469Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:51:37.558985 dockerd[2397]: time="2026-01-23T23:51:37.558965269Z" level=info msg="Daemon has completed initialization" Jan 23 23:51:37.620079 dockerd[2397]: time="2026-01-23T23:51:37.619905149Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:51:37.621006 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:51:38.410919 containerd[1831]: time="2026-01-23T23:51:38.410601509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:51:39.246058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667080172.mount: Deactivated successfully. Jan 23 23:51:40.754165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:51:40.759065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:40.880035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:40.892260 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:51:40.962370 kubelet[2603]: E0123 23:51:40.962328 2603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:51:40.964505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:51:40.964663 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 23:51:41.246259 containerd[1831]: time="2026-01-23T23:51:41.246138252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:41.249329 containerd[1831]: time="2026-01-23T23:51:41.249283270Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 23 23:51:41.255119 containerd[1831]: time="2026-01-23T23:51:41.255065616Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:41.262076 containerd[1831]: time="2026-01-23T23:51:41.260487955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:41.262076 containerd[1831]: time="2026-01-23T23:51:41.261555295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.850914226s"
Jan 23 23:51:41.262076 containerd[1831]: time="2026-01-23T23:51:41.261585696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 23 23:51:41.262422 containerd[1831]: time="2026-01-23T23:51:41.262400191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 23 23:51:42.603362 containerd[1831]: time="2026-01-23T23:51:42.603306658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:42.607709 containerd[1831]: time="2026-01-23T23:51:42.607475668Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 23 23:51:42.610969 containerd[1831]: time="2026-01-23T23:51:42.610921557Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:42.616208 containerd[1831]: time="2026-01-23T23:51:42.616161290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:42.619013 containerd[1831]: time="2026-01-23T23:51:42.618968176Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.356391462s"
Jan 23 23:51:42.619013 containerd[1831]: time="2026-01-23T23:51:42.619014657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 23 23:51:42.620730 containerd[1831]: time="2026-01-23T23:51:42.620497340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 23 23:51:43.784760 containerd[1831]: time="2026-01-23T23:51:43.784710715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:43.788215 containerd[1831]: time="2026-01-23T23:51:43.788179004Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747"
Jan 23 23:51:43.792612 containerd[1831]: time="2026-01-23T23:51:43.792562854Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:43.798393 containerd[1831]: time="2026-01-23T23:51:43.798348509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:51:43.799516 containerd[1831]: time="2026-01-23T23:51:43.799381751Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.178840331s"
Jan 23 23:51:43.799516 containerd[1831]: time="2026-01-23T23:51:43.799413751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 23 23:51:43.800163 containerd[1831]: time="2026-01-23T23:51:43.799995473Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 23 23:51:44.923432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665295241.mount: Deactivated successfully.
Jan 23 23:51:45.274461 containerd[1831]: time="2026-01-23T23:51:45.274414768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:45.277550 containerd[1831]: time="2026-01-23T23:51:45.277391056Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:51:45.281054 containerd[1831]: time="2026-01-23T23:51:45.280975985Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:45.285026 containerd[1831]: time="2026-01-23T23:51:45.284973794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:45.286112 containerd[1831]: time="2026-01-23T23:51:45.285560436Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.485537683s" Jan 23 23:51:45.286112 containerd[1831]: time="2026-01-23T23:51:45.285594956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:51:45.286509 containerd[1831]: time="2026-01-23T23:51:45.286487278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:51:45.918544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549718898.mount: Deactivated successfully. 
Jan 23 23:51:47.354463 containerd[1831]: time="2026-01-23T23:51:47.354415829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:47.359065 containerd[1831]: time="2026-01-23T23:51:47.359031281Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:51:47.363870 containerd[1831]: time="2026-01-23T23:51:47.363134971Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:47.368394 containerd[1831]: time="2026-01-23T23:51:47.368347343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:47.369705 containerd[1831]: time="2026-01-23T23:51:47.369534426Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.082887268s" Jan 23 23:51:47.369705 containerd[1831]: time="2026-01-23T23:51:47.369570706Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:51:47.370010 containerd[1831]: time="2026-01-23T23:51:47.369988227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:51:47.937261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460190257.mount: Deactivated successfully. 
Jan 23 23:51:47.957893 containerd[1831]: time="2026-01-23T23:51:47.957138068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:47.959890 containerd[1831]: time="2026-01-23T23:51:47.959851876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:51:47.962957 containerd[1831]: time="2026-01-23T23:51:47.962915765Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:47.968568 containerd[1831]: time="2026-01-23T23:51:47.968525741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:47.969419 containerd[1831]: time="2026-01-23T23:51:47.969296264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 598.515395ms" Jan 23 23:51:47.969419 containerd[1831]: time="2026-01-23T23:51:47.969329384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:51:47.970099 containerd[1831]: time="2026-01-23T23:51:47.969939746Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:51:48.628313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248329457.mount: Deactivated successfully. Jan 23 23:51:49.712885 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 23 23:51:51.004355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 23:51:51.013405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:51.117962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:51.123189 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:51:51.644385 kubelet[2739]: E0123 23:51:51.156662 2739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:51:51.158550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:51:51.158685 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 23:51:52.374347 containerd[1831]: time="2026-01-23T23:51:52.374288379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:52.377875 containerd[1831]: time="2026-01-23T23:51:52.377689189Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:51:52.382568 containerd[1831]: time="2026-01-23T23:51:52.382536523Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:52.389673 containerd[1831]: time="2026-01-23T23:51:52.389221023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:51:52.390471 containerd[1831]: time="2026-01-23T23:51:52.390439506Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.42046868s" Jan 23 23:51:52.390525 containerd[1831]: time="2026-01-23T23:51:52.390471506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:51:53.524959 update_engine[1809]: I20260123 23:51:53.524883 1809 update_attempter.cc:509] Updating boot flags... Jan 23 23:51:53.581276 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2790) Jan 23 23:51:53.690890 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2790) Jan 23 23:51:57.615353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:57.622066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:57.649718 systemd[1]: Reloading requested from client PID 2852 ('systemctl') (unit session-9.scope)... Jan 23 23:51:57.649731 systemd[1]: Reloading... Jan 23 23:51:57.746901 zram_generator::config[2898]: No configuration found. Jan 23 23:51:57.841666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:51:57.920227 systemd[1]: Reloading finished in 270 ms. Jan 23 23:51:57.971129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:57.972121 (kubelet)[2959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:51:57.975578 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:57.976438 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:51:57.976697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:51:57.980426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:51:58.135047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:51:58.138130 (kubelet)[2975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 23:51:58.175374 kubelet[2975]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 23:51:58.175374 kubelet[2975]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 23:51:58.175374 kubelet[2975]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 23:51:58.175374 kubelet[2975]: I0123 23:51:58.174616 2975 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 23:51:59.224876 kubelet[2975]: I0123 23:51:59.223653 2975 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 23:51:59.224876 kubelet[2975]: I0123 23:51:59.223689 2975 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 23:51:59.224876 kubelet[2975]: I0123 23:51:59.224124 2975 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 23:51:59.247366 kubelet[2975]: E0123 23:51:59.247333 2975 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:51:59.249184 kubelet[2975]: I0123 23:51:59.249166 2975 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 23:51:59.253470 kubelet[2975]: E0123 23:51:59.253436 2975 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 23 23:51:59.253470 kubelet[2975]: I0123 23:51:59.253470 2975 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 23 23:51:59.257244 kubelet[2975]: I0123 23:51:59.257222 2975 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 23:51:59.258202 kubelet[2975]: I0123 23:51:59.258162 2975 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 23:51:59.258368 kubelet[2975]: I0123 23:51:59.258205 2975 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-73953443dc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 23 23:51:59.258459 kubelet[2975]: I0123 23:51:59.258378 2975 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 23:51:59.258459 kubelet[2975]: I0123 23:51:59.258389 2975 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 23:51:59.258536 kubelet[2975]: I0123 23:51:59.258521 2975 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:51:59.261426 kubelet[2975]: I0123 23:51:59.261406 2975 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 23:51:59.261479 kubelet[2975]: I0123 23:51:59.261431 2975 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 23:51:59.261479 kubelet[2975]: I0123 23:51:59.261449 2975 kubelet.go:352] "Adding apiserver pod source"
Jan 23 23:51:59.261479 kubelet[2975]: I0123 23:51:59.261465 2975 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 23:51:59.267429 kubelet[2975]: I0123 23:51:59.267396 2975 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 23 23:51:59.267929 kubelet[2975]: I0123 23:51:59.267906 2975 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 23:51:59.267994 kubelet[2975]: W0123 23:51:59.267975 2975 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 23:51:59.268889 kubelet[2975]: I0123 23:51:59.268517 2975 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 23:51:59.268889 kubelet[2975]: I0123 23:51:59.268553 2975 server.go:1287] "Started kubelet"
Jan 23 23:51:59.268889 kubelet[2975]: W0123 23:51:59.268675 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 23 23:51:59.268889 kubelet[2975]: W0123 23:51:59.268691 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-73953443dc&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 23 23:51:59.268889 kubelet[2975]: E0123 23:51:59.268719 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:51:59.268889 kubelet[2975]: E0123 23:51:59.268734 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-73953443dc&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:51:59.271035 kubelet[2975]: I0123 23:51:59.270766 2975 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 23:51:59.275791 kubelet[2975]: I0123 23:51:59.275253 2975 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 23:51:59.275791 kubelet[2975]: E0123 23:51:59.275623 2975 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-73953443dc.188d813b0e3ef1c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-73953443dc,UID:ci-4081.3.6-n-73953443dc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-73953443dc,},FirstTimestamp:2026-01-23 23:51:59.268532679 +0000 UTC m=+1.127566239,LastTimestamp:2026-01-23 23:51:59.268532679 +0000 UTC m=+1.127566239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-73953443dc,}"
Jan 23 23:51:59.276425 kubelet[2975]: I0123 23:51:59.276356 2975 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 23:51:59.276686 kubelet[2975]: I0123 23:51:59.276666 2975 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 23:51:59.278701 kubelet[2975]: I0123 23:51:59.278660 2975 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 23:51:59.281889 kubelet[2975]: E0123 23:51:59.281374 2975 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 23:51:59.281889 kubelet[2975]: I0123 23:51:59.281600 2975 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 23:51:59.282948 kubelet[2975]: I0123 23:51:59.282917 2975 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 23:51:59.285202 kubelet[2975]: I0123 23:51:59.283048 2975 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 23:51:59.285202 kubelet[2975]: E0123 23:51:59.283198 2975 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-73953443dc\" not found"
Jan 23 23:51:59.285304 kubelet[2975]: I0123 23:51:59.285249 2975 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 23:51:59.285401 kubelet[2975]: W0123 23:51:59.285364 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 23 23:51:59.285431 kubelet[2975]: E0123 23:51:59.285411 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 23 23:51:59.285492 kubelet[2975]: E0123 23:51:59.285471 2975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-73953443dc?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms"
Jan 23 23:51:59.286639 kubelet[2975]: I0123 23:51:59.286340 2975 factory.go:221] Registration of the systemd container factory successfully
Jan 23 23:51:59.286639 kubelet[2975]: I0123 23:51:59.286450 2975 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 23:51:59.287775 kubelet[2975]: I0123 23:51:59.287759 2975 factory.go:221] Registration of the containerd container factory successfully
Jan 23 23:51:59.329988 kubelet[2975]: I0123 23:51:59.329837 2975 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 23:51:59.331184 kubelet[2975]: I0123 23:51:59.330907 2975 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 23:51:59.331184 kubelet[2975]: I0123 23:51:59.330930 2975 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 23:51:59.331184 kubelet[2975]: I0123 23:51:59.330953 2975 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 23:51:59.331184 kubelet[2975]: I0123 23:51:59.330960 2975 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:51:59.331184 kubelet[2975]: E0123 23:51:59.331000 2975 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:51:59.331544 kubelet[2975]: W0123 23:51:59.331518 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 23 23:51:59.331588 kubelet[2975]: E0123 23:51:59.331555 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:51:59.347902 kubelet[2975]: I0123 23:51:59.347870 2975 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:51:59.347902 kubelet[2975]: I0123 23:51:59.347887 2975 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:51:59.348052 kubelet[2975]: I0123 23:51:59.347925 2975 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:51:59.353624 kubelet[2975]: I0123 23:51:59.353593 2975 policy_none.go:49] "None policy: Start" Jan 23 23:51:59.353624 kubelet[2975]: I0123 23:51:59.353624 2975 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:51:59.353715 kubelet[2975]: I0123 23:51:59.353637 2975 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:51:59.363589 kubelet[2975]: I0123 23:51:59.363563 2975 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:51:59.364749 kubelet[2975]: I0123 23:51:59.363954 2975 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:51:59.364749 kubelet[2975]: I0123 23:51:59.363969 2975 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:51:59.364749 kubelet[2975]: I0123 23:51:59.364663 2975 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:51:59.365793 kubelet[2975]: E0123 23:51:59.365753 2975 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:51:59.365903 kubelet[2975]: E0123 23:51:59.365798 2975 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-73953443dc\" not found" Jan 23 23:51:59.436561 kubelet[2975]: E0123 23:51:59.436249 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.439243 kubelet[2975]: E0123 23:51:59.438931 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.441401 kubelet[2975]: E0123 23:51:59.441368 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.465719 kubelet[2975]: I0123 23:51:59.465699 2975 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.466238 kubelet[2975]: E0123 23:51:59.466206 2975 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.485884 kubelet[2975]: E0123 23:51:59.485776 2975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-73953443dc?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms" Jan 23 23:51:59.486926 kubelet[2975]: I0123 23:51:59.486899 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.486996 kubelet[2975]: I0123 23:51:59.486936 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.486996 kubelet[2975]: I0123 23:51:59.486955 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.486996 kubelet[2975]: I0123 23:51:59.486976 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77f10eb2f9509809b8dd2b93daa003d8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-73953443dc\" (UID: \"77f10eb2f9509809b8dd2b93daa003d8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.486996 kubelet[2975]: I0123 
23:51:59.486992 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71b8c459907ac2597be4db8f98ae58c2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-73953443dc\" (UID: \"71b8c459907ac2597be4db8f98ae58c2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.487088 kubelet[2975]: I0123 23:51:59.487007 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71b8c459907ac2597be4db8f98ae58c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-73953443dc\" (UID: \"71b8c459907ac2597be4db8f98ae58c2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.487088 kubelet[2975]: I0123 23:51:59.487024 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.487088 kubelet[2975]: I0123 23:51:59.487038 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71b8c459907ac2597be4db8f98ae58c2-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-73953443dc\" (UID: \"71b8c459907ac2597be4db8f98ae58c2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.487088 kubelet[2975]: I0123 23:51:59.487053 2975 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.668074 kubelet[2975]: I0123 23:51:59.667755 2975 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.668074 kubelet[2975]: E0123 23:51:59.668044 2975 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-73953443dc" Jan 23 23:51:59.737370 containerd[1831]: time="2026-01-23T23:51:59.737262341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-73953443dc,Uid:71b8c459907ac2597be4db8f98ae58c2,Namespace:kube-system,Attempt:0,}" Jan 23 23:51:59.740322 containerd[1831]: time="2026-01-23T23:51:59.740289070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-73953443dc,Uid:b6e5d0ef080adb1b2f4bfb489d8d2054,Namespace:kube-system,Attempt:0,}" Jan 23 23:51:59.743040 containerd[1831]: time="2026-01-23T23:51:59.743004157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-73953443dc,Uid:77f10eb2f9509809b8dd2b93daa003d8,Namespace:kube-system,Attempt:0,}" Jan 23 23:51:59.887153 kubelet[2975]: E0123 23:51:59.887112 2975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-73953443dc?timeout=10s\": dial tcp 
10.200.20.33:6443: connect: connection refused" interval="800ms" Jan 23 23:52:00.070654 kubelet[2975]: I0123 23:52:00.070626 2975 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:00.070985 kubelet[2975]: E0123 23:52:00.070959 2975 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:00.290426 kubelet[2975]: W0123 23:52:00.290367 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-73953443dc&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 23 23:52:00.290804 kubelet[2975]: E0123 23:52:00.290434 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-73953443dc&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:52:00.385373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187682436.mount: Deactivated successfully. Jan 23 23:52:00.423930 containerd[1831]: time="2026-01-23T23:52:00.423886311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:52:00.427550 containerd[1831]: time="2026-01-23T23:52:00.427466961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:52:00.431891 containerd[1831]: time="2026-01-23T23:52:00.431395652Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:52:00.435426 containerd[1831]: time="2026-01-23T23:52:00.434679981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:52:00.439272 containerd[1831]: time="2026-01-23T23:52:00.439242073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:52:00.443670 containerd[1831]: time="2026-01-23T23:52:00.442656482Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:52:00.445883 containerd[1831]: time="2026-01-23T23:52:00.445696050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:52:00.450446 containerd[1831]: time="2026-01-23T23:52:00.450407263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:52:00.451305 containerd[1831]: time="2026-01-23T23:52:00.451277505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 708.213908ms" Jan 23 23:52:00.453525 containerd[1831]: time="2026-01-23T23:52:00.453088310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 715.742088ms" Jan 23 23:52:00.456429 containerd[1831]: time="2026-01-23T23:52:00.456277719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 715.923529ms" Jan 23 23:52:00.487315 kubelet[2975]: W0123 23:52:00.487217 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 23 23:52:00.487315 kubelet[2975]: E0123 23:52:00.487283 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:52:00.534080 kubelet[2975]: W0123 23:52:00.533978 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 23 23:52:00.534080 kubelet[2975]: E0123 23:52:00.534046 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:52:00.592403 kubelet[2975]: W0123 23:52:00.592303 2975 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 23 23:52:00.592403 kubelet[2975]: E0123 23:52:00.592368 2975 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:52:00.688433 kubelet[2975]: E0123 23:52:00.688321 2975 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-73953443dc?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" 
interval="1.6s" Jan 23 23:52:00.872769 kubelet[2975]: I0123 23:52:00.872740 2975 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:00.873105 kubelet[2975]: E0123 23:52:00.873068 2975 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:00.977668 containerd[1831]: time="2026-01-23T23:52:00.977284602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:00.977668 containerd[1831]: time="2026-01-23T23:52:00.977368203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:00.977668 containerd[1831]: time="2026-01-23T23:52:00.977390603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:00.977668 containerd[1831]: time="2026-01-23T23:52:00.977482043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:00.982403 containerd[1831]: time="2026-01-23T23:52:00.982317016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:00.983384 containerd[1831]: time="2026-01-23T23:52:00.982948058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:00.983384 containerd[1831]: time="2026-01-23T23:52:00.982969898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:00.983384 containerd[1831]: time="2026-01-23T23:52:00.983073258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:00.990324 containerd[1831]: time="2026-01-23T23:52:00.989996397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:00.990324 containerd[1831]: time="2026-01-23T23:52:00.990047757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:00.990324 containerd[1831]: time="2026-01-23T23:52:00.990059557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:00.990324 containerd[1831]: time="2026-01-23T23:52:00.990147997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:01.050017 containerd[1831]: time="2026-01-23T23:52:01.049974798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-73953443dc,Uid:71b8c459907ac2597be4db8f98ae58c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"29128b3e6c369bb82845a58c13254ec827ba033f1a1b90edaa062a190a099756\"" Jan 23 23:52:01.053771 containerd[1831]: time="2026-01-23T23:52:01.053614608Z" level=info msg="CreateContainer within sandbox \"29128b3e6c369bb82845a58c13254ec827ba033f1a1b90edaa062a190a099756\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:52:01.056712 containerd[1831]: time="2026-01-23T23:52:01.056661616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-73953443dc,Uid:b6e5d0ef080adb1b2f4bfb489d8d2054,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b276659e1f928ccdd9c80b3b69920de370afc0ef51a1fb3c4037f0119b089f9\"" Jan 23 23:52:01.058766 containerd[1831]: time="2026-01-23T23:52:01.058734062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-73953443dc,Uid:77f10eb2f9509809b8dd2b93daa003d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5cfbf8a6b635f0ecf2fb363a4a8433f4d04e2e9c99c68242a48fba5e01d0291\"" Jan 23 23:52:01.060584 containerd[1831]: time="2026-01-23T23:52:01.060475067Z" level=info msg="CreateContainer within sandbox \"5b276659e1f928ccdd9c80b3b69920de370afc0ef51a1fb3c4037f0119b089f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:52:01.061943 containerd[1831]: time="2026-01-23T23:52:01.061822110Z" level=info msg="CreateContainer within sandbox \"a5cfbf8a6b635f0ecf2fb363a4a8433f4d04e2e9c99c68242a48fba5e01d0291\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:52:01.125430 containerd[1831]: time="2026-01-23T23:52:01.125259921Z" level=info msg="CreateContainer within sandbox \"29128b3e6c369bb82845a58c13254ec827ba033f1a1b90edaa062a190a099756\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5fb886f9cf3b8f20a51b912acddbf293f94112107f8088e6292feed183fe3d6\"" Jan 23 23:52:01.126079 containerd[1831]: time="2026-01-23T23:52:01.126055723Z" level=info msg="StartContainer for \"e5fb886f9cf3b8f20a51b912acddbf293f94112107f8088e6292feed183fe3d6\"" Jan 23 23:52:01.138617 containerd[1831]: time="2026-01-23T23:52:01.138545957Z" level=info msg="CreateContainer within sandbox \"5b276659e1f928ccdd9c80b3b69920de370afc0ef51a1fb3c4037f0119b089f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cfa12e529f533ac98df1e172d278d849e7fd25ef018f2e2e9adaf7f3b010b875\"" Jan 23 23:52:01.139242 containerd[1831]: time="2026-01-23T23:52:01.139221759Z" level=info msg="StartContainer for \"cfa12e529f533ac98df1e172d278d849e7fd25ef018f2e2e9adaf7f3b010b875\"" Jan 23 23:52:01.151371 containerd[1831]: time="2026-01-23T23:52:01.150704310Z" level=info msg="CreateContainer within sandbox \"a5cfbf8a6b635f0ecf2fb363a4a8433f4d04e2e9c99c68242a48fba5e01d0291\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8383cfce0643f480ef78cb35b3264d7bad3e7b36b2c4798a38c4691efcb283b3\"" Jan 23 23:52:01.151371 containerd[1831]: time="2026-01-23T23:52:01.151105471Z" level=info msg="StartContainer for \"8383cfce0643f480ef78cb35b3264d7bad3e7b36b2c4798a38c4691efcb283b3\"" Jan 23 23:52:01.196498 containerd[1831]: time="2026-01-23T23:52:01.196315913Z" level=info 
msg="StartContainer for \"e5fb886f9cf3b8f20a51b912acddbf293f94112107f8088e6292feed183fe3d6\" returns successfully" Jan 23 23:52:01.222385 containerd[1831]: time="2026-01-23T23:52:01.222341463Z" level=info msg="StartContainer for \"cfa12e529f533ac98df1e172d278d849e7fd25ef018f2e2e9adaf7f3b010b875\" returns successfully" Jan 23 23:52:01.307903 containerd[1831]: time="2026-01-23T23:52:01.306494809Z" level=info msg="StartContainer for \"8383cfce0643f480ef78cb35b3264d7bad3e7b36b2c4798a38c4691efcb283b3\" returns successfully" Jan 23 23:52:01.343880 kubelet[2975]: E0123 23:52:01.342209 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:01.348672 kubelet[2975]: E0123 23:52:01.348514 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:01.356199 kubelet[2975]: E0123 23:52:01.356127 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:02.357272 kubelet[2975]: E0123 23:52:02.357194 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:02.359115 kubelet[2975]: E0123 23:52:02.357782 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:02.477316 kubelet[2975]: I0123 23:52:02.477284 2975 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.358822 kubelet[2975]: E0123 23:52:03.358645 2975 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.727664 kubelet[2975]: E0123 23:52:03.727388 2975 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-73953443dc\" not found" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.765089 kubelet[2975]: E0123 23:52:03.764971 2975 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-73953443dc.188d813b0e3ef1c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-73953443dc,UID:ci-4081.3.6-n-73953443dc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-73953443dc,},FirstTimestamp:2026-01-23 23:51:59.268532679 +0000 UTC m=+1.127566239,LastTimestamp:2026-01-23 23:51:59.268532679 +0000 UTC m=+1.127566239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-73953443dc,}" Jan 23 23:52:03.821740 kubelet[2975]: I0123 23:52:03.821501 2975 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.821740 kubelet[2975]: E0123 23:52:03.821547 2975 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"ci-4081.3.6-n-73953443dc\": node \"ci-4081.3.6-n-73953443dc\" not found" Jan 23 23:52:03.849170 kubelet[2975]: E0123 23:52:03.847934 2975 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-73953443dc.188d813b0ed9de0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-73953443dc,UID:ci-4081.3.6-n-73953443dc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-73953443dc,},FirstTimestamp:2026-01-23 23:51:59.278685706 +0000 UTC m=+1.137719266,LastTimestamp:2026-01-23 23:51:59.278685706 +0000 UTC m=+1.137719266,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-73953443dc,}" Jan 23 23:52:03.884146 kubelet[2975]: I0123 23:52:03.883950 2975 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.909882 kubelet[2975]: E0123 23:52:03.908766 2975 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.909882 kubelet[2975]: I0123 23:52:03.908799 2975 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.914044 kubelet[2975]: E0123 23:52:03.913831 2975 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-73953443dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.914044 kubelet[2975]: I0123 23:52:03.913866 2975 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:52:03.919378 kubelet[2975]: E0123 23:52:03.919339 2975 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-73953443dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:52:04.265535 kubelet[2975]: I0123 23:52:04.265288 2975 apiserver.go:52] "Watching apiserver" Jan 23 23:52:04.286003 kubelet[2975]: I0123 23:52:04.285969 2975 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:52:06.180451 systemd[1]: Reloading requested from client PID 3250 ('systemctl') (unit session-9.scope)... Jan 23 23:52:06.180464 systemd[1]: Reloading... Jan 23 23:52:06.275895 zram_generator::config[3290]: No configuration found. Jan 23 23:52:06.385340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 23 23:52:06.417111 kubelet[2975]: I0123 23:52:06.416271 2975 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:06.426834 kubelet[2975]: W0123 23:52:06.426226 2975 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:52:06.470298 systemd[1]: Reloading finished in 289 ms. Jan 23 23:52:06.499264 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:52:06.516161 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:52:06.516460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:52:06.523457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:52:06.699047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:52:06.708281 (kubelet)[3365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:52:06.746819 kubelet[3365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:52:06.746819 kubelet[3365]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:52:06.746819 kubelet[3365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:52:06.747915 kubelet[3365]: I0123 23:52:06.746871 3365 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:52:06.759472 kubelet[3365]: I0123 23:52:06.759437 3365 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:52:06.760946 kubelet[3365]: I0123 23:52:06.759647 3365 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:52:06.760946 kubelet[3365]: I0123 23:52:06.759971 3365 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:52:06.764804 kubelet[3365]: I0123 23:52:06.764769 3365 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:52:06.767688 kubelet[3365]: I0123 23:52:06.767492 3365 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:52:06.770322 kubelet[3365]: E0123 23:52:06.770295 3365 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:52:06.770322 kubelet[3365]: I0123 23:52:06.770321 3365 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:52:06.772938 kubelet[3365]: I0123 23:52:06.772920 3365 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:52:06.773378 kubelet[3365]: I0123 23:52:06.773345 3365 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:52:06.773529 kubelet[3365]: I0123 23:52:06.773375 3365 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-73953443dc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:52:06.773597 kubelet[3365]: I0123 23:52:06.773539 3365 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:52:06.773597 kubelet[3365]: I0123 23:52:06.773547 3365 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:52:06.773597 kubelet[3365]: I0123 23:52:06.773590 3365 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:52:06.773939 kubelet[3365]: I0123 23:52:06.773711 3365 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:52:06.773939 kubelet[3365]: I0123 23:52:06.773725 3365 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:52:06.773939 kubelet[3365]: I0123 23:52:06.773743 3365 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:52:06.773939 kubelet[3365]: I0123 23:52:06.773752 3365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:52:06.777837 kubelet[3365]: I0123 23:52:06.777729 3365 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:52:06.778863 kubelet[3365]: I0123 23:52:06.778530 3365 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:52:06.781864 kubelet[3365]: I0123 23:52:06.781169 3365 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:52:06.781864 kubelet[3365]: I0123 23:52:06.781205 3365 server.go:1287] "Started kubelet" Jan 23 23:52:06.787242 kubelet[3365]: I0123 23:52:06.787219 3365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:52:06.791462 kubelet[3365]: I0123 23:52:06.791167 3365 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:52:06.794864 kubelet[3365]: I0123 23:52:06.792052 3365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:52:06.794864 kubelet[3365]: I0123 23:52:06.792309 3365 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:52:06.795124 kubelet[3365]: I0123 23:52:06.795105 3365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:52:06.798597 kubelet[3365]: I0123 23:52:06.797700 3365 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:52:06.799912 kubelet[3365]: E0123 23:52:06.799159 3365 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-73953443dc\" not found" Jan 23 23:52:06.803583 kubelet[3365]: I0123 23:52:06.800703 3365 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:52:06.803810 kubelet[3365]: I0123 23:52:06.803797 3365 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:52:06.808228 kubelet[3365]: I0123 23:52:06.808208 3365 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:52:06.813387 kubelet[3365]: I0123 23:52:06.813345 3365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:52:06.814231 kubelet[3365]: I0123 23:52:06.814212 3365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:52:06.814269 kubelet[3365]: I0123 23:52:06.814244 3365 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:52:06.814269 kubelet[3365]: I0123 23:52:06.814263 3365 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:52:06.814269 kubelet[3365]: I0123 23:52:06.814269 3365 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:52:06.814335 kubelet[3365]: E0123 23:52:06.814309 3365 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:52:06.819323 kubelet[3365]: I0123 23:52:06.819292 3365 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:52:06.819675 kubelet[3365]: I0123 23:52:06.819396 3365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:52:06.825303 kubelet[3365]: I0123 23:52:06.825220 3365 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:52:06.826006 kubelet[3365]: E0123 23:52:06.825981 3365 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:52:06.879312 kubelet[3365]: I0123 23:52:06.878894 3365 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:52:06.879312 kubelet[3365]: I0123 23:52:06.878915 3365 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:52:06.879312 kubelet[3365]: I0123 23:52:06.878936 3365 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:52:06.879312 kubelet[3365]: I0123 23:52:06.879095 3365 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:52:06.879834 kubelet[3365]: I0123 23:52:06.879106 3365 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:52:06.879909 kubelet[3365]: I0123 23:52:06.879900 3365 policy_none.go:49] "None policy: Start" Jan 23 23:52:06.879968 kubelet[3365]: I0123 23:52:06.879960 3365 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:52:06.880017 kubelet[3365]: I0123 23:52:06.880010 3365 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:52:06.880251 kubelet[3365]: I0123 23:52:06.880179 3365 state_mem.go:75] "Updated machine memory state" Jan 23 23:52:06.884431 kubelet[3365]: I0123 23:52:06.883171 3365 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:52:06.884818 kubelet[3365]: I0123 23:52:06.884789 3365 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:52:06.885425 kubelet[3365]: I0123 23:52:06.885220 3365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:52:06.887076 kubelet[3365]: E0123 23:52:06.887019 3365 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:52:06.887273 kubelet[3365]: I0123 23:52:06.887176 3365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:52:06.915898 kubelet[3365]: I0123 23:52:06.914942 3365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:52:06.915898 kubelet[3365]: I0123 23:52:06.915319 3365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:06.915898 kubelet[3365]: I0123 23:52:06.915548 3365 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-73953443dc" Jan 23 23:52:06.925214 kubelet[3365]: W0123 23:52:06.925174 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:52:06.930710 kubelet[3365]: W0123 23:52:06.930688 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:52:06.931695 kubelet[3365]: W0123 23:52:06.931670 3365 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 23:52:06.931767 kubelet[3365]: E0123 23:52:06.931720 3365 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:06.990199 kubelet[3365]: I0123 23:52:06.989989 3365 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.019398 kubelet[3365]: I0123 23:52:07.019278 3365 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.019736 kubelet[3365]: I0123 23:52:07.019502 3365 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.104887 kubelet[3365]: I0123 23:52:07.104830 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.104887 kubelet[3365]: I0123 23:52:07.104889 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105054 kubelet[3365]: I0123 23:52:07.104913 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77f10eb2f9509809b8dd2b93daa003d8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-73953443dc\" (UID: \"77f10eb2f9509809b8dd2b93daa003d8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105054 kubelet[3365]: I0123 23:52:07.104930 3365 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71b8c459907ac2597be4db8f98ae58c2-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-73953443dc\" (UID: \"71b8c459907ac2597be4db8f98ae58c2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105054 kubelet[3365]: I0123 23:52:07.104945 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71b8c459907ac2597be4db8f98ae58c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-73953443dc\" (UID: \"71b8c459907ac2597be4db8f98ae58c2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105054 kubelet[3365]: I0123 23:52:07.104960 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105054 kubelet[3365]: I0123 23:52:07.104975 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105163 kubelet[3365]: I0123 23:52:07.104991 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6e5d0ef080adb1b2f4bfb489d8d2054-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-73953443dc\" (UID: \"b6e5d0ef080adb1b2f4bfb489d8d2054\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.105163 kubelet[3365]: I0123 23:52:07.105005 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71b8c459907ac2597be4db8f98ae58c2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-73953443dc\" (UID: \"71b8c459907ac2597be4db8f98ae58c2\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" Jan 23 23:52:07.776085 kubelet[3365]: I0123 23:52:07.776044 3365 apiserver.go:52] "Watching apiserver" Jan 23 23:52:07.804228 kubelet[3365]: I0123 23:52:07.804190 3365 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:52:07.868603 kubelet[3365]: I0123 23:52:07.868538 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-73953443dc" podStartSLOduration=1.868518407 podStartE2EDuration="1.868518407s" podCreationTimestamp="2026-01-23 23:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:52:07.856893614 +0000 UTC m=+1.145754300" watchObservedRunningTime="2026-01-23 23:52:07.868518407 +0000 UTC m=+1.157379093" Jan 23 23:52:07.882286 kubelet[3365]: I0123 23:52:07.882211 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-73953443dc" podStartSLOduration=1.8821942470000002 
podStartE2EDuration="1.882194247s" podCreationTimestamp="2026-01-23 23:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:52:07.86933889 +0000 UTC m=+1.158199576" watchObservedRunningTime="2026-01-23 23:52:07.882194247 +0000 UTC m=+1.171054933" Jan 23 23:52:07.900018 kubelet[3365]: I0123 23:52:07.899853 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-73953443dc" podStartSLOduration=1.899837298 podStartE2EDuration="1.899837298s" podCreationTimestamp="2026-01-23 23:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:52:07.883102569 +0000 UTC m=+1.171963255" watchObservedRunningTime="2026-01-23 23:52:07.899837298 +0000 UTC m=+1.188697944" Jan 23 23:52:11.552794 kubelet[3365]: I0123 23:52:11.552713 3365 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:52:11.553693 kubelet[3365]: I0123 23:52:11.553265 3365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:52:11.553724 containerd[1831]: time="2026-01-23T23:52:11.553026696Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:52:11.629816 kubelet[3365]: I0123 23:52:11.629782 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e995be75-6603-489a-9f54-89dcb2f7bd24-kube-proxy\") pod \"kube-proxy-4tdq8\" (UID: \"e995be75-6603-489a-9f54-89dcb2f7bd24\") " pod="kube-system/kube-proxy-4tdq8" Jan 23 23:52:11.630128 kubelet[3365]: I0123 23:52:11.630041 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e995be75-6603-489a-9f54-89dcb2f7bd24-xtables-lock\") pod \"kube-proxy-4tdq8\" (UID: \"e995be75-6603-489a-9f54-89dcb2f7bd24\") " pod="kube-system/kube-proxy-4tdq8" Jan 23 23:52:11.630128 kubelet[3365]: I0123 23:52:11.630070 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e995be75-6603-489a-9f54-89dcb2f7bd24-lib-modules\") pod \"kube-proxy-4tdq8\" (UID: \"e995be75-6603-489a-9f54-89dcb2f7bd24\") " pod="kube-system/kube-proxy-4tdq8" Jan 23 23:52:11.630128 kubelet[3365]: I0123 23:52:11.630091 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8mk9\" (UniqueName: \"kubernetes.io/projected/e995be75-6603-489a-9f54-89dcb2f7bd24-kube-api-access-g8mk9\") pod \"kube-proxy-4tdq8\" (UID: \"e995be75-6603-489a-9f54-89dcb2f7bd24\") " pod="kube-system/kube-proxy-4tdq8" Jan 23 23:52:11.743020 kubelet[3365]: E0123 23:52:11.742967 3365 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 23:52:11.743020 kubelet[3365]: E0123 23:52:11.743008 3365 projected.go:194] Error preparing data for projected volume kube-api-access-g8mk9 for pod kube-system/kube-proxy-4tdq8: configmap "kube-root-ca.crt" not found Jan 23 23:52:11.743163 kubelet[3365]: E0123 23:52:11.743086 3365 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/e995be75-6603-489a-9f54-89dcb2f7bd24-kube-api-access-g8mk9 podName:e995be75-6603-489a-9f54-89dcb2f7bd24 nodeName:}" failed. No retries permitted until 2026-01-23 23:52:12.243062361 +0000 UTC m=+5.531923047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g8mk9" (UniqueName: "kubernetes.io/projected/e995be75-6603-489a-9f54-89dcb2f7bd24-kube-api-access-g8mk9") pod "kube-proxy-4tdq8" (UID: "e995be75-6603-489a-9f54-89dcb2f7bd24") : configmap "kube-root-ca.crt" not found Jan 23 23:52:12.521927 containerd[1831]: time="2026-01-23T23:52:12.521832715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4tdq8,Uid:e995be75-6603-489a-9f54-89dcb2f7bd24,Namespace:kube-system,Attempt:0,}" Jan 23 23:52:12.562226 containerd[1831]: time="2026-01-23T23:52:12.562143911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:12.563059 containerd[1831]: time="2026-01-23T23:52:12.562790113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:12.563059 containerd[1831]: time="2026-01-23T23:52:12.562849593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:12.563059 containerd[1831]: time="2026-01-23T23:52:12.563021554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:12.621913 containerd[1831]: time="2026-01-23T23:52:12.621055120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4tdq8,Uid:e995be75-6603-489a-9f54-89dcb2f7bd24,Namespace:kube-system,Attempt:0,} returns sandbox id \"d84a2a9fcf364c689588df04f5515c4ded969f4c49773c7d28be8ed9c764fb1c\"" Jan 23 23:52:12.633513 containerd[1831]: time="2026-01-23T23:52:12.632521193Z" level=info msg="CreateContainer within sandbox \"d84a2a9fcf364c689588df04f5515c4ded969f4c49773c7d28be8ed9c764fb1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:52:12.654223 kubelet[3365]: I0123 23:52:12.654018 3365 status_manager.go:890] "Failed to get status for pod" podUID="02669a09-96ed-41d5-83af-936b81ec4828" pod="tigera-operator/tigera-operator-7dcd859c48-484rq" err="pods \"tigera-operator-7dcd859c48-484rq\" is forbidden: User \"system:node:ci-4081.3.6-n-73953443dc\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.6-n-73953443dc' and this object" Jan 23 23:52:12.681198 containerd[1831]: time="2026-01-23T23:52:12.681066132Z" level=info msg="CreateContainer within sandbox \"d84a2a9fcf364c689588df04f5515c4ded969f4c49773c7d28be8ed9c764fb1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8bb093e4bae49025f6cb7fb1528500301ee735bdc7f2b13ee62e78819a3eac00\"" Jan 23 23:52:12.682502 containerd[1831]: time="2026-01-23T23:52:12.682389976Z" level=info msg="StartContainer for \"8bb093e4bae49025f6cb7fb1528500301ee735bdc7f2b13ee62e78819a3eac00\"" Jan 23 23:52:12.734644 containerd[1831]: time="2026-01-23T23:52:12.734597766Z" level=info msg="StartContainer for \"8bb093e4bae49025f6cb7fb1528500301ee735bdc7f2b13ee62e78819a3eac00\" returns successfully" Jan 23 23:52:12.737517 kubelet[3365]: I0123 23:52:12.737241 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/02669a09-96ed-41d5-83af-936b81ec4828-var-lib-calico\") pod \"tigera-operator-7dcd859c48-484rq\" (UID: \"02669a09-96ed-41d5-83af-936b81ec4828\") " pod="tigera-operator/tigera-operator-7dcd859c48-484rq" Jan 23 23:52:12.737517 kubelet[3365]: I0123 23:52:12.737285 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wm7\" (UniqueName: \"kubernetes.io/projected/02669a09-96ed-41d5-83af-936b81ec4828-kube-api-access-z6wm7\") pod \"tigera-operator-7dcd859c48-484rq\" (UID: \"02669a09-96ed-41d5-83af-936b81ec4828\") " pod="tigera-operator/tigera-operator-7dcd859c48-484rq" Jan 23 23:52:12.889391 kubelet[3365]: I0123 23:52:12.888187 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4tdq8" podStartSLOduration=1.8881671660000001 podStartE2EDuration="1.888167166s" podCreationTimestamp="2026-01-23 23:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:52:12.872516801 +0000 UTC m=+6.161377487" watchObservedRunningTime="2026-01-23 23:52:12.888167166 +0000 UTC m=+6.177027812" Jan 23 23:52:12.960060 containerd[1831]: time="2026-01-23T23:52:12.960013492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-484rq,Uid:02669a09-96ed-41d5-83af-936b81ec4828,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:52:13.008239 containerd[1831]: time="2026-01-23T23:52:13.007992350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:13.008239 containerd[1831]: time="2026-01-23T23:52:13.008050790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:13.008239 containerd[1831]: time="2026-01-23T23:52:13.008066670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:13.008239 containerd[1831]: time="2026-01-23T23:52:13.008191711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:13.050029 containerd[1831]: time="2026-01-23T23:52:13.049988271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-484rq,Uid:02669a09-96ed-41d5-83af-936b81ec4828,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ce60ac58a8cf171c3fd955c127e26b5014f5899c178beaa3bf44c73c2fa0248\"" Jan 23 23:52:13.052282 containerd[1831]: time="2026-01-23T23:52:13.052242997Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:52:13.341980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616783917.mount: Deactivated successfully. Jan 23 23:52:14.635774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801553973.mount: Deactivated successfully. 
Jan 23 23:52:15.487219 containerd[1831]: time="2026-01-23T23:52:15.487169622Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:52:15.490556 containerd[1831]: time="2026-01-23T23:52:15.490373191Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:52:15.495080 containerd[1831]: time="2026-01-23T23:52:15.495022445Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:52:15.500271 containerd[1831]: time="2026-01-23T23:52:15.500220140Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:52:15.501124 containerd[1831]: time="2026-01-23T23:52:15.501007702Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.448728185s" Jan 23 23:52:15.501124 containerd[1831]: time="2026-01-23T23:52:15.501040302Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:52:15.504889 containerd[1831]: time="2026-01-23T23:52:15.503763830Z" level=info msg="CreateContainer within sandbox \"2ce60ac58a8cf171c3fd955c127e26b5014f5899c178beaa3bf44c73c2fa0248\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:52:15.531742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971190152.mount: Deactivated successfully. 
Jan 23 23:52:15.542846 containerd[1831]: time="2026-01-23T23:52:15.542732822Z" level=info msg="CreateContainer within sandbox \"2ce60ac58a8cf171c3fd955c127e26b5014f5899c178beaa3bf44c73c2fa0248\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"448c2f08c0d37e64a7907cbc0adec3bc09cacb323cc3f179c6c2220035fa2566\"" Jan 23 23:52:15.544513 containerd[1831]: time="2026-01-23T23:52:15.543637504Z" level=info msg="StartContainer for \"448c2f08c0d37e64a7907cbc0adec3bc09cacb323cc3f179c6c2220035fa2566\"" Jan 23 23:52:15.594925 containerd[1831]: time="2026-01-23T23:52:15.594881171Z" level=info msg="StartContainer for \"448c2f08c0d37e64a7907cbc0adec3bc09cacb323cc3f179c6c2220035fa2566\" returns successfully" Jan 23 23:52:16.791773 kubelet[3365]: I0123 23:52:16.791709 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-484rq" podStartSLOduration=2.341100415 podStartE2EDuration="4.791688645s" podCreationTimestamp="2026-01-23 23:52:12 +0000 UTC" firstStartedPulling="2026-01-23 23:52:13.051239754 +0000 UTC m=+6.340100440" lastFinishedPulling="2026-01-23 23:52:15.501827984 +0000 UTC m=+8.790688670" observedRunningTime="2026-01-23 23:52:15.878597825 +0000 UTC m=+9.167458471" watchObservedRunningTime="2026-01-23 23:52:16.791688645 +0000 UTC m=+10.080549331" Jan 23 23:52:21.579984 sudo[2381]: pam_unix(sudo:session): session closed for user root Jan 23 23:52:21.658969 sshd[2377]: pam_unix(sshd:session): session closed for user core Jan 23 23:52:21.667194 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:39766.service: Deactivated successfully. Jan 23 23:52:21.672381 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:52:21.673787 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:52:21.674950 systemd-logind[1805]: Removed session 9. 
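The pod_startup_latency_tracker entry above for tigera-operator reports podStartE2EDuration="4.791688645s" alongside the pull window (firstStartedPulling 23:52:13.051239754, lastFinishedPulling 23:52:15.501827984). The SLO duration appears to be the end-to-end time minus the image-pull window: the sketch below reproduces podStartSLOduration=2.341100415 from exactly those log values. Note the subtraction rule is inferred from the numbers in this log, not taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above;
	// the layout matches Go's default time.Time string form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-01-23 23:52:13.051239754 +0000 UTC")
	last, _ := time.Parse(layout, "2026-01-23 23:52:15.501827984 +0000 UTC")
	e2e, _ := time.ParseDuration("4.791688645s")

	pulling := last.Sub(first) // image pull window
	slo := e2e - pulling       // inferred: E2E minus time spent pulling
	fmt.Println(pulling)       // 2.45058823s
	fmt.Println(slo)           // 2.341100415s == podStartSLOduration
}

The same relation holds trivially for the kube-proxy and control-plane pods earlier in the log, where no image was pulled (both pull timestamps are the zero time) and SLO equals E2E.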
Jan 23 23:52:32.652447 kubelet[3365]: I0123 23:52:32.652268 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f78c958d-b065-4e40-9155-1bb1c86fbc60-tigera-ca-bundle\") pod \"calico-typha-677899d665-grg4g\" (UID: \"f78c958d-b065-4e40-9155-1bb1c86fbc60\") " pod="calico-system/calico-typha-677899d665-grg4g" Jan 23 23:52:32.652447 kubelet[3365]: I0123 23:52:32.652317 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvjh\" (UniqueName: \"kubernetes.io/projected/f78c958d-b065-4e40-9155-1bb1c86fbc60-kube-api-access-hxvjh\") pod \"calico-typha-677899d665-grg4g\" (UID: \"f78c958d-b065-4e40-9155-1bb1c86fbc60\") " pod="calico-system/calico-typha-677899d665-grg4g" Jan 23 23:52:32.652447 kubelet[3365]: I0123 23:52:32.652337 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f78c958d-b065-4e40-9155-1bb1c86fbc60-typha-certs\") pod \"calico-typha-677899d665-grg4g\" (UID: \"f78c958d-b065-4e40-9155-1bb1c86fbc60\") " pod="calico-system/calico-typha-677899d665-grg4g" Jan 23 23:52:32.753795 kubelet[3365]: I0123 23:52:32.753269 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-var-lib-calico\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.753795 kubelet[3365]: I0123 23:52:32.753312 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-tigera-ca-bundle\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.753795 kubelet[3365]: I0123 23:52:32.753329 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-cni-net-dir\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.753795 kubelet[3365]: I0123 23:52:32.753343 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-lib-modules\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.753795 kubelet[3365]: I0123 23:52:32.753358 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-xtables-lock\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.754871 kubelet[3365]: I0123 23:52:32.753385 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-node-certs\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 
23:52:32.754871 kubelet[3365]: I0123 23:52:32.753400 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-cni-log-dir\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.754871 kubelet[3365]: I0123 23:52:32.753415 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-flexvol-driver-host\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.754871 kubelet[3365]: I0123 23:52:32.753433 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-var-run-calico\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.754871 kubelet[3365]: I0123 23:52:32.753474 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9vf\" (UniqueName: \"kubernetes.io/projected/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-kube-api-access-4r9vf\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.754990 kubelet[3365]: I0123 23:52:32.753502 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-cni-bin-dir\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.754990 kubelet[3365]: I0123 23:52:32.753517 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4c79aae-e3dd-4093-b8bd-4d07f2444d04-policysync\") pod \"calico-node-kcvvr\" (UID: \"a4c79aae-e3dd-4093-b8bd-4d07f2444d04\") " pod="calico-system/calico-node-kcvvr" Jan 23 23:52:32.788763 containerd[1831]: time="2026-01-23T23:52:32.788709122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-677899d665-grg4g,Uid:f78c958d-b065-4e40-9155-1bb1c86fbc60,Namespace:calico-system,Attempt:0,}" Jan 23 23:52:32.841266 containerd[1831]: time="2026-01-23T23:52:32.841157224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:32.841266 containerd[1831]: time="2026-01-23T23:52:32.841220064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:32.841266 containerd[1831]: time="2026-01-23T23:52:32.841230904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:32.841663 containerd[1831]: time="2026-01-23T23:52:32.841333625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:32.858354 kubelet[3365]: E0123 23:52:32.858315 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.858730 kubelet[3365]: W0123 23:52:32.858521 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.858730 kubelet[3365]: E0123 23:52:32.858554 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.860225 kubelet[3365]: E0123 23:52:32.860007 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.860225 kubelet[3365]: W0123 23:52:32.860129 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.860225 kubelet[3365]: E0123 23:52:32.860148 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.861365 kubelet[3365]: E0123 23:52:32.860954 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.861365 kubelet[3365]: W0123 23:52:32.860970 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.861365 kubelet[3365]: E0123 23:52:32.861163 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.862556 kubelet[3365]: E0123 23:52:32.862516 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.862651 kubelet[3365]: W0123 23:52:32.862639 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.862833 kubelet[3365]: E0123 23:52:32.862722 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.863414 kubelet[3365]: E0123 23:52:32.863295 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.863414 kubelet[3365]: W0123 23:52:32.863312 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.864407 kubelet[3365]: E0123 23:52:32.863530 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:52:32.866981 kubelet[3365]: E0123 23:52:32.864722 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.866981 kubelet[3365]: W0123 23:52:32.864759 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.866981 kubelet[3365]: E0123 23:52:32.864822 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.866981 kubelet[3365]: E0123 23:52:32.865024 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.866981 kubelet[3365]: W0123 23:52:32.865034 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.866981 kubelet[3365]: E0123 23:52:32.866559 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.866981 kubelet[3365]: E0123 23:52:32.866780 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.866981 kubelet[3365]: W0123 23:52:32.866791 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.866981 kubelet[3365]: E0123 23:52:32.866907 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.867444 kubelet[3365]: E0123 23:52:32.867334 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.867444 kubelet[3365]: W0123 23:52:32.867377 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.867557 kubelet[3365]: E0123 23:52:32.867544 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.868005 kubelet[3365]: E0123 23:52:32.867991 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.868135 kubelet[3365]: W0123 23:52:32.868122 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.868953 kubelet[3365]: E0123 23:52:32.868934 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:52:32.870012 kubelet[3365]: E0123 23:52:32.869578 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.870012 kubelet[3365]: W0123 23:52:32.869592 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.870012 kubelet[3365]: E0123 23:52:32.869974 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.870633 kubelet[3365]: E0123 23:52:32.870614 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.870633 kubelet[3365]: W0123 23:52:32.870629 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.870732 kubelet[3365]: E0123 23:52:32.870643 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.871169 kubelet[3365]: E0123 23:52:32.870958 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.871169 kubelet[3365]: W0123 23:52:32.870973 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.871169 kubelet[3365]: E0123 23:52:32.870985 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.871829 kubelet[3365]: E0123 23:52:32.871685 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.871829 kubelet[3365]: W0123 23:52:32.871699 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.871829 kubelet[3365]: E0123 23:52:32.871712 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.897841 kubelet[3365]: E0123 23:52:32.896070 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.897841 kubelet[3365]: W0123 23:52:32.896092 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.897841 kubelet[3365]: E0123 23:52:32.896113 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:52:32.897841 kubelet[3365]: E0123 23:52:32.896657 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:52:32.955652 kubelet[3365]: E0123 23:52:32.955547 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.955806 kubelet[3365]: W0123 23:52:32.955788 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.955890 kubelet[3365]: E0123 23:52:32.955877 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.956163 kubelet[3365]: E0123 23:52:32.956149 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.957457 kubelet[3365]: W0123 23:52:32.957401 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.958706 kubelet[3365]: E0123 23:52:32.958682 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.959076 kubelet[3365]: E0123 23:52:32.959062 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.959170 kubelet[3365]: W0123 23:52:32.959157 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.959238 kubelet[3365]: E0123 23:52:32.959218 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:32.959488 kubelet[3365]: E0123 23:52:32.959475 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:32.959659 kubelet[3365]: W0123 23:52:32.959646 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:32.960050 kubelet[3365]: E0123 23:52:32.960036 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 23 23:52:32.981065 kubelet[3365]: I0123 23:52:32.981007 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d85237ab-62c3-4029-9724-6c41efba9b29-socket-dir\") pod \"csi-node-driver-zg499\" (UID: \"d85237ab-62c3-4029-9724-6c41efba9b29\") " pod="calico-system/csi-node-driver-zg499"
Jan 23 23:52:32.982583 kubelet[3365]: I0123 23:52:32.982292 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d85237ab-62c3-4029-9724-6c41efba9b29-registration-dir\") pod \"csi-node-driver-zg499\" (UID: \"d85237ab-62c3-4029-9724-6c41efba9b29\") " pod="calico-system/csi-node-driver-zg499"
Jan 23 23:52:32.983699 kubelet[3365]: I0123 23:52:32.983507 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d85237ab-62c3-4029-9724-6c41efba9b29-varrun\") pod \"csi-node-driver-zg499\" (UID: \"d85237ab-62c3-4029-9724-6c41efba9b29\") " pod="calico-system/csi-node-driver-zg499"
Jan 23 23:52:32.986712 kubelet[3365]: I0123 23:52:32.986635 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d85237ab-62c3-4029-9724-6c41efba9b29-kubelet-dir\") pod \"csi-node-driver-zg499\" (UID: \"d85237ab-62c3-4029-9724-6c41efba9b29\") " pod="calico-system/csi-node-driver-zg499"
Jan 23 23:52:32.992181 kubelet[3365]: I0123 23:52:32.991392 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kltkr\" (UniqueName: \"kubernetes.io/projected/d85237ab-62c3-4029-9724-6c41efba9b29-kube-api-access-kltkr\") pod \"csi-node-driver-zg499\" (UID: \"d85237ab-62c3-4029-9724-6c41efba9b29\") " pod="calico-system/csi-node-driver-zg499"
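
The VerifyControllerAttachedVolume entries above enumerate the csi-node-driver pod's five volumes: four host-path mounts (socket-dir, registration-dir, varrun, kubelet-dir) and one projected service-account token (kube-api-access-kltkr). Neither volume type has a controller-side attach step, so this check simply records them as attached in the kubelet's actual-state cache before mounting. A sketch of the equivalent volume list rebuilt from these log fields, in Go against k8s.io/api; the host paths and token settings below are illustrative placeholders, since only the volume names and types appear in the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        hostPath := func(name, path string) corev1.Volume {
            return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: path},
                },
            }
        }
        expiry := int64(3607) // typical kube-api-access expiry; illustrative
        volumes := []corev1.Volume{
            // Paths are guesses for a Calico CSI node driver, not from the log.
            hostPath("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),
            hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"),
            hostPath("varrun", "/var/run"),
            hostPath("kubelet-dir", "/var/lib/kubelet"),
            {
                Name: "kube-api-access-kltkr",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                                Path:              "token",
                                ExpirationSeconds: &expiry,
                            },
                        }},
                    },
                },
            },
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
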
Jan 23 23:52:32.999761 containerd[1831]: time="2026-01-23T23:52:32.999628056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcvvr,Uid:a4c79aae-e3dd-4093-b8bd-4d07f2444d04,Namespace:calico-system,Attempt:0,}"
Jan 23 23:52:33.049967 containerd[1831]: time="2026-01-23T23:52:33.049777552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-677899d665-grg4g,Uid:f78c958d-b065-4e40-9155-1bb1c86fbc60,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfd25deabb83aed5277d2b4056e1f2381fecf691314459f5893be9f3cdc7a765\""
Jan 23 23:52:33.055123 containerd[1831]: time="2026-01-23T23:52:33.055051566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 23:52:33.066125 containerd[1831]: time="2026-01-23T23:52:33.065258194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:52:33.066125 containerd[1831]: time="2026-01-23T23:52:33.065413755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:52:33.066125 containerd[1831]: time="2026-01-23T23:52:33.065435315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:52:33.066125 containerd[1831]: time="2026-01-23T23:52:33.065930516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:52:33.126832 containerd[1831]: time="2026-01-23T23:52:33.126698961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kcvvr,Uid:a4c79aae-e3dd-4093-b8bd-4d07f2444d04,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\""
Jan 23 23:52:34.414934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833489037.mount: Deactivated successfully.
Jan 23 23:52:34.817601 kubelet[3365]: E0123 23:52:34.817564 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29"
Jan 23 23:52:35.266525 containerd[1831]: time="2026-01-23T23:52:35.266480345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:35.269407 containerd[1831]: time="2026-01-23T23:52:35.269374633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 23:52:35.272706 containerd[1831]: time="2026-01-23T23:52:35.272659202Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:35.280085 containerd[1831]: time="2026-01-23T23:52:35.279799262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:35.280538 containerd[1831]: time="2026-01-23T23:52:35.280509624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.225394097s"
Jan 23 23:52:35.280602 containerd[1831]: time="2026-01-23T23:52:35.280548624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 23:52:35.281792 containerd[1831]: time="2026-01-23T23:52:35.281766467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 23:52:35.295665 containerd[1831]: time="2026-01-23T23:52:35.295513184Z" level=info msg="CreateContainer within sandbox \"dfd25deabb83aed5277d2b4056e1f2381fecf691314459f5893be9f3cdc7a765\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
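
The typha pull that began at 23:52:33.055 completes here: 33,090,541 bytes in 2.225394097s, roughly 14.9 MB/s from ghcr.io. A quick check of that rate:

    package main

    import "fmt"

    func main() {
        const bytes = 33090541      // "size" reported by containerd above
        const seconds = 2.225394097 // pull duration from the same log line
        fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6) // prints 14.9 MB/s
    }
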
Jan 23 23:52:35.338188 containerd[1831]: time="2026-01-23T23:52:35.337905700Z" level=info msg="CreateContainer within sandbox \"dfd25deabb83aed5277d2b4056e1f2381fecf691314459f5893be9f3cdc7a765\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c67fd29ae43f8088e7c65aab4ee1b58e654b39aa7fc8ef9ccce3776ebc5a509e\""
Jan 23 23:52:35.340408 containerd[1831]: time="2026-01-23T23:52:35.339278424Z" level=info msg="StartContainer for \"c67fd29ae43f8088e7c65aab4ee1b58e654b39aa7fc8ef9ccce3776ebc5a509e\""
Jan 23 23:52:35.409135 containerd[1831]: time="2026-01-23T23:52:35.409023293Z" level=info msg="StartContainer for \"c67fd29ae43f8088e7c65aab4ee1b58e654b39aa7fc8ef9ccce3776ebc5a509e\" returns successfully"
Jan 23 23:52:35.931454 kubelet[3365]: I0123 23:52:35.931320 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-677899d665-grg4g" podStartSLOduration=1.7042366389999999 podStartE2EDuration="3.93115162s" podCreationTimestamp="2026-01-23 23:52:32 +0000 UTC" firstStartedPulling="2026-01-23 23:52:33.054757726 +0000 UTC m=+26.343618412" lastFinishedPulling="2026-01-23 23:52:35.281672707 +0000 UTC m=+28.570533393" observedRunningTime="2026-01-23 23:52:35.929850057 +0000 UTC m=+29.218710703" watchObservedRunningTime="2026-01-23 23:52:35.93115162 +0000 UTC m=+29.220012346"
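
The pod_startup_latency_tracker entry is self-consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since the startup SLO metric excludes pull time; the long 1.7042366389999999 is just 1.704236639 with floating-point printing noise. The arithmetic, checked in Go:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matching Go's time.Time String() output used in the log.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            log.Fatal(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-23 23:52:32 +0000 UTC")
        firstPull := mustParse("2026-01-23 23:52:33.054757726 +0000 UTC")
        lastPull := mustParse("2026-01-23 23:52:35.281672707 +0000 UTC")
        running := mustParse("2026-01-23 23:52:35.93115162 +0000 UTC")

        e2e := running.Sub(created)          // 3.93115162s, the E2E duration
        slo := e2e - lastPull.Sub(firstPull) // minus the 2.226914981s pull window
        fmt.Println(e2e, slo)                // 3.93115162s 1.704236639s, matching the log
    }
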
Error: unexpected end of JSON input" Jan 23 23:52:36.030438 kubelet[3365]: E0123 23:52:36.030417 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.030438 kubelet[3365]: W0123 23:52:36.030436 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.030752 kubelet[3365]: E0123 23:52:36.030730 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.031021 kubelet[3365]: E0123 23:52:36.031006 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.031054 kubelet[3365]: W0123 23:52:36.031020 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.031132 kubelet[3365]: E0123 23:52:36.031116 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.031243 kubelet[3365]: E0123 23:52:36.031230 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.031243 kubelet[3365]: W0123 23:52:36.031241 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.031330 kubelet[3365]: E0123 23:52:36.031309 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.031416 kubelet[3365]: E0123 23:52:36.031404 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.031452 kubelet[3365]: W0123 23:52:36.031440 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.031516 kubelet[3365]: E0123 23:52:36.031501 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.031703 kubelet[3365]: E0123 23:52:36.031689 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.031741 kubelet[3365]: W0123 23:52:36.031711 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.031741 kubelet[3365]: E0123 23:52:36.031727 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:52:36.031981 kubelet[3365]: E0123 23:52:36.031966 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.031981 kubelet[3365]: W0123 23:52:36.031980 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.032051 kubelet[3365]: E0123 23:52:36.031993 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.032303 kubelet[3365]: E0123 23:52:36.032287 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.032303 kubelet[3365]: W0123 23:52:36.032300 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.032366 kubelet[3365]: E0123 23:52:36.032312 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.032492 kubelet[3365]: E0123 23:52:36.032476 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.032492 kubelet[3365]: W0123 23:52:36.032490 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.032545 kubelet[3365]: E0123 23:52:36.032500 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.032684 kubelet[3365]: E0123 23:52:36.032671 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.032684 kubelet[3365]: W0123 23:52:36.032682 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.032740 kubelet[3365]: E0123 23:52:36.032690 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:52:36.033022 kubelet[3365]: E0123 23:52:36.033007 3365 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:52:36.033022 kubelet[3365]: W0123 23:52:36.033020 3365 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:52:36.033092 kubelet[3365]: E0123 23:52:36.033031 3365 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 23:52:36.620141 containerd[1831]: time="2026-01-23T23:52:36.620069669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:36.623304 containerd[1831]: time="2026-01-23T23:52:36.623252678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 23 23:52:36.626871 containerd[1831]: time="2026-01-23T23:52:36.626811248Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:36.631365 containerd[1831]: time="2026-01-23T23:52:36.631272620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:36.632499 containerd[1831]: time="2026-01-23T23:52:36.632393103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.350595036s"
Jan 23 23:52:36.632499 containerd[1831]: time="2026-01-23T23:52:36.632424263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 23 23:52:36.634779 containerd[1831]: time="2026-01-23T23:52:36.634559989Z" level=info msg="CreateContainer within sandbox \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 23:52:36.671764 containerd[1831]: time="2026-01-23T23:52:36.671725133Z" level=info msg="CreateContainer within sandbox \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603\""
Jan 23 23:52:36.672507 containerd[1831]: time="2026-01-23T23:52:36.672332975Z" level=info msg="StartContainer for \"f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603\""
Jan 23 23:52:36.729189 containerd[1831]: time="2026-01-23T23:52:36.729104454Z" level=info msg="StartContainer for \"f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603\" returns successfully"
Jan 23 23:52:36.756461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603-rootfs.mount: Deactivated successfully.
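[Annotation] The FlexVolume burst above is kubelet's dynamic plugin prober, not a pod failure: on each probe it execs the driver binary with the single argument "init" and JSON-decodes stdout, so a binary that is not installed yet produces the "executable file not found in $PATH" warning and the empty output produces "unexpected end of JSON input". The noise stops once the flexvol-driver container started above installs the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/. A minimal Python stand-in for that handshake (the driver path is copied from the log; the probe function itself is illustrative, the real implementation is kubelet's driver-call.go in Go):

    import json
    import subprocess

    # Driver path exactly as it appears in the kubelet messages above.
    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def probe(driver):
        """Mimic kubelet's FlexVolume 'init' call: exec the driver, decode JSON."""
        try:
            out = subprocess.run([driver, "init"], capture_output=True,
                                 text=True, timeout=60).stdout
        except (FileNotFoundError, NotADirectoryError):
            return None  # logged above as: executable file not found in $PATH
        try:
            # A healthy driver prints a status object such as {"status": "Success"}.
            return json.loads(out)
        except json.JSONDecodeError:
            return None  # empty stdout is Go's "unexpected end of JSON input"

    print(probe(DRIVER))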
Jan 23 23:52:36.816514 kubelet[3365]: E0123 23:52:36.816255 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29"
Jan 23 23:52:36.913210 kubelet[3365]: I0123 23:52:36.913114 3365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 23:52:37.000970 containerd[1831]: time="2026-01-23T23:52:37.000930855Z" level=error msg="collecting metrics for f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603" error="cgroups: cgroup deleted: unknown"
Jan 23 23:52:37.797533 containerd[1831]: time="2026-01-23T23:52:37.797465684Z" level=info msg="shim disconnected" id=f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603 namespace=k8s.io
Jan 23 23:52:37.798174 containerd[1831]: time="2026-01-23T23:52:37.798012926Z" level=warning msg="cleaning up after shim disconnected" id=f969ba8a0ef11d8569e235c1b78a1a813e4264799cece3d7eae44694a72d6603 namespace=k8s.io
Jan 23 23:52:37.798174 containerd[1831]: time="2026-01-23T23:52:37.798035566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:52:37.917189 containerd[1831]: time="2026-01-23T23:52:37.917143739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 23:52:38.816682 kubelet[3365]: E0123 23:52:38.816443 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29"
Jan 23 23:52:40.816295 kubelet[3365]: E0123 23:52:40.815957 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29"
Jan 23 23:52:41.119868 containerd[1831]: time="2026-01-23T23:52:41.119732424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:41.122991 containerd[1831]: time="2026-01-23T23:52:41.122829232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 23 23:52:41.126406 containerd[1831]: time="2026-01-23T23:52:41.126118442Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:41.130518 containerd[1831]: time="2026-01-23T23:52:41.130488934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:52:41.131259 containerd[1831]: time="2026-01-23T23:52:41.131227536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.214045597s"
Jan 23 23:52:41.131259 containerd[1831]: time="2026-01-23T23:52:41.131257656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 23 23:52:41.135349 containerd[1831]: time="2026-01-23T23:52:41.135316347Z" level=info msg="CreateContainer within sandbox \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 23:52:41.184975 containerd[1831]: time="2026-01-23T23:52:41.184932286Z" level=info msg="CreateContainer within sandbox \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d087e54d43b174e8bd523be1a2709e64cca7138666e5b55e9a59d6a39f0024ad\""
Jan 23 23:52:41.187824 containerd[1831]: time="2026-01-23T23:52:41.185398488Z" level=info msg="StartContainer for \"d087e54d43b174e8bd523be1a2709e64cca7138666e5b55e9a59d6a39f0024ad\""
Jan 23 23:52:41.238781 containerd[1831]: time="2026-01-23T23:52:41.238736477Z" level=info msg="StartContainer for \"d087e54d43b174e8bd523be1a2709e64cca7138666e5b55e9a59d6a39f0024ad\" returns successfully"
Jan 23 23:52:42.421633 containerd[1831]: time="2026-01-23T23:52:42.421512228Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 23:52:42.441357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d087e54d43b174e8bd523be1a2709e64cca7138666e5b55e9a59d6a39f0024ad-rootfs.mount: Deactivated successfully.
Jan 23 23:52:42.491410 kubelet[3365]: I0123 23:52:42.491128 3365 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:52:42.580648 kubelet[3365]: I0123 23:52:42.580123 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab0fffca-fdb6-48fb-890d-1befd8d9f70b-config-volume\") pod \"coredns-668d6bf9bc-2mmfw\" (UID: \"ab0fffca-fdb6-48fb-890d-1befd8d9f70b\") " pod="kube-system/coredns-668d6bf9bc-2mmfw" Jan 23 23:52:42.580648 kubelet[3365]: I0123 23:52:42.580162 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzzf\" (UniqueName: \"kubernetes.io/projected/d4b92f62-d0ce-4074-b14e-99f94c7e34c5-kube-api-access-2gzzf\") pod \"coredns-668d6bf9bc-d2ck2\" (UID: \"d4b92f62-d0ce-4074-b14e-99f94c7e34c5\") " pod="kube-system/coredns-668d6bf9bc-d2ck2" Jan 23 23:52:42.580648 kubelet[3365]: I0123 23:52:42.580190 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsd9j\" (UniqueName: \"kubernetes.io/projected/3e54eef2-74de-4252-afd0-c8cf40739108-kube-api-access-gsd9j\") pod \"whisker-7c5ccfd59c-h6qxv\" (UID: \"3e54eef2-74de-4252-afd0-c8cf40739108\") " pod="calico-system/whisker-7c5ccfd59c-h6qxv" Jan 23 23:52:42.580648 kubelet[3365]: I0123 23:52:42.580213 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86223f84-2792-4d18-8124-56ab2f35f54f-calico-apiserver-certs\") pod \"calico-apiserver-6c8fbd4d54-j9z4b\" (UID: \"86223f84-2792-4d18-8124-56ab2f35f54f\") " pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" Jan 23 23:52:42.580648 kubelet[3365]: I0123 23:52:42.580235 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxlv\" (UniqueName: \"kubernetes.io/projected/ab0fffca-fdb6-48fb-890d-1befd8d9f70b-kube-api-access-nkxlv\") pod \"coredns-668d6bf9bc-2mmfw\" (UID: \"ab0fffca-fdb6-48fb-890d-1befd8d9f70b\") " pod="kube-system/coredns-668d6bf9bc-2mmfw" Jan 23 23:52:42.580891 kubelet[3365]: I0123 23:52:42.580259 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-ca-bundle\") pod \"whisker-7c5ccfd59c-h6qxv\" (UID: \"3e54eef2-74de-4252-afd0-c8cf40739108\") " pod="calico-system/whisker-7c5ccfd59c-h6qxv" Jan 23 23:52:42.580891 kubelet[3365]: I0123 23:52:42.580277 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgwjg\" (UniqueName: \"kubernetes.io/projected/86223f84-2792-4d18-8124-56ab2f35f54f-kube-api-access-lgwjg\") pod \"calico-apiserver-6c8fbd4d54-j9z4b\" (UID: \"86223f84-2792-4d18-8124-56ab2f35f54f\") " pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" Jan 23 23:52:42.580891 kubelet[3365]: I0123 23:52:42.580297 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4b92f62-d0ce-4074-b14e-99f94c7e34c5-config-volume\") pod \"coredns-668d6bf9bc-d2ck2\" (UID: \"d4b92f62-d0ce-4074-b14e-99f94c7e34c5\") " pod="kube-system/coredns-668d6bf9bc-d2ck2" Jan 23 23:52:42.580891 kubelet[3365]: I0123 23:52:42.580320 3365 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-backend-key-pair\") pod \"whisker-7c5ccfd59c-h6qxv\" (UID: \"3e54eef2-74de-4252-afd0-c8cf40739108\") " pod="calico-system/whisker-7c5ccfd59c-h6qxv" Jan 23 23:52:43.254357 kubelet[3365]: I0123 23:52:42.680840 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e432e42b-559a-473b-8e55-fe59b8af82e5-calico-apiserver-certs\") pod \"calico-apiserver-6c8fbd4d54-2csvx\" (UID: \"e432e42b-559a-473b-8e55-fe59b8af82e5\") " pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" Jan 23 23:52:43.254357 kubelet[3365]: I0123 23:52:42.680937 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nffqr\" (UniqueName: \"kubernetes.io/projected/e432e42b-559a-473b-8e55-fe59b8af82e5-kube-api-access-nffqr\") pod \"calico-apiserver-6c8fbd4d54-2csvx\" (UID: \"e432e42b-559a-473b-8e55-fe59b8af82e5\") " pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" Jan 23 23:52:43.254357 kubelet[3365]: I0123 23:52:42.680956 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24-tigera-ca-bundle\") pod \"calico-kube-controllers-6bfff8d8c9-qd78x\" (UID: \"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24\") " pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" Jan 23 23:52:43.254357 kubelet[3365]: I0123 23:52:42.681017 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbdp\" (UniqueName: \"kubernetes.io/projected/4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24-kube-api-access-sfbdp\") pod \"calico-kube-controllers-6bfff8d8c9-qd78x\" (UID: \"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24\") " pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" Jan 23 23:52:43.254357 kubelet[3365]: I0123 23:52:42.681040 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e6b6e508-b275-4ee9-aa24-d58a31eb441c-goldmane-key-pair\") pod \"goldmane-666569f655-mj4bl\" (UID: \"e6b6e508-b275-4ee9-aa24-d58a31eb441c\") " pod="calico-system/goldmane-666569f655-mj4bl" Jan 23 23:52:43.256313 kubelet[3365]: I0123 23:52:42.681055 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpxb\" (UniqueName: \"kubernetes.io/projected/e6b6e508-b275-4ee9-aa24-d58a31eb441c-kube-api-access-wlpxb\") pod \"goldmane-666569f655-mj4bl\" (UID: \"e6b6e508-b275-4ee9-aa24-d58a31eb441c\") " pod="calico-system/goldmane-666569f655-mj4bl" Jan 23 23:52:43.256313 kubelet[3365]: I0123 23:52:42.681076 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6b6e508-b275-4ee9-aa24-d58a31eb441c-config\") pod \"goldmane-666569f655-mj4bl\" (UID: \"e6b6e508-b275-4ee9-aa24-d58a31eb441c\") " pod="calico-system/goldmane-666569f655-mj4bl" Jan 23 23:52:43.256313 kubelet[3365]: I0123 23:52:42.681095 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e6b6e508-b275-4ee9-aa24-d58a31eb441c-goldmane-ca-bundle\") pod \"goldmane-666569f655-mj4bl\" (UID: \"e6b6e508-b275-4ee9-aa24-d58a31eb441c\") " pod="calico-system/goldmane-666569f655-mj4bl" Jan 23 23:52:43.286732 containerd[1831]: time="2026-01-23T23:52:43.286332168Z" level=info msg="shim disconnected" id=d087e54d43b174e8bd523be1a2709e64cca7138666e5b55e9a59d6a39f0024ad namespace=k8s.io Jan 23 23:52:43.286732 containerd[1831]: time="2026-01-23T23:52:43.286389489Z" level=warning msg="cleaning up after shim disconnected" id=d087e54d43b174e8bd523be1a2709e64cca7138666e5b55e9a59d6a39f0024ad namespace=k8s.io Jan 23 23:52:43.286732 containerd[1831]: time="2026-01-23T23:52:43.286397609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:52:43.293259 containerd[1831]: time="2026-01-23T23:52:43.292555986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zg499,Uid:d85237ab-62c3-4029-9724-6c41efba9b29,Namespace:calico-system,Attempt:0,}" Jan 23 23:52:43.369278 containerd[1831]: time="2026-01-23T23:52:43.369228080Z" level=error msg="Failed to destroy network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.369595 containerd[1831]: time="2026-01-23T23:52:43.369558561Z" level=error msg="encountered an error cleaning up failed sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.369641 containerd[1831]: time="2026-01-23T23:52:43.369619002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zg499,Uid:d85237ab-62c3-4029-9724-6c41efba9b29,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.369874 kubelet[3365]: E0123 23:52:43.369824 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.370185 kubelet[3365]: E0123 23:52:43.370023 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zg499" Jan 23 23:52:43.370185 kubelet[3365]: E0123 23:52:43.370060 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zg499" Jan 23 23:52:43.370185 kubelet[3365]: E0123 23:52:43.370106 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:52:43.432264 containerd[1831]: time="2026-01-23T23:52:43.431895296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mmfw,Uid:ab0fffca-fdb6-48fb-890d-1befd8d9f70b,Namespace:kube-system,Attempt:0,}" Jan 23 23:52:43.440897 containerd[1831]: time="2026-01-23T23:52:43.439499277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c5ccfd59c-h6qxv,Uid:3e54eef2-74de-4252-afd0-c8cf40739108,Namespace:calico-system,Attempt:0,}" Jan 23 23:52:43.453874 containerd[1831]: time="2026-01-23T23:52:43.453523036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-j9z4b,Uid:86223f84-2792-4d18-8124-56ab2f35f54f,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:52:43.453874 containerd[1831]: time="2026-01-23T23:52:43.453746797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2ck2,Uid:d4b92f62-d0ce-4074-b14e-99f94c7e34c5,Namespace:kube-system,Attempt:0,}" Jan 23 23:52:43.456840 containerd[1831]: time="2026-01-23T23:52:43.456674405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mj4bl,Uid:e6b6e508-b275-4ee9-aa24-d58a31eb441c,Namespace:calico-system,Attempt:0,}" Jan 23 23:52:43.469991 containerd[1831]: time="2026-01-23T23:52:43.469541561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bfff8d8c9-qd78x,Uid:4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24,Namespace:calico-system,Attempt:0,}" Jan 23 23:52:43.469991 containerd[1831]: time="2026-01-23T23:52:43.469753562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-2csvx,Uid:e432e42b-559a-473b-8e55-fe59b8af82e5,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:52:43.626126 containerd[1831]: time="2026-01-23T23:52:43.625968036Z" level=error msg="Failed to destroy network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.626891 containerd[1831]: time="2026-01-23T23:52:43.626328196Z" level=error msg="encountered an error cleaning up failed sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.626891 containerd[1831]: time="2026-01-23T23:52:43.626378037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mmfw,Uid:ab0fffca-fdb6-48fb-890d-1befd8d9f70b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.627339 kubelet[3365]: E0123 23:52:43.627075 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.627339 kubelet[3365]: E0123 23:52:43.627149 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2mmfw" Jan 23 23:52:43.627339 kubelet[3365]: E0123 23:52:43.627168 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2mmfw" Jan 23 23:52:43.628549 kubelet[3365]: E0123 23:52:43.627212 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2mmfw_kube-system(ab0fffca-fdb6-48fb-890d-1befd8d9f70b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2mmfw_kube-system(ab0fffca-fdb6-48fb-890d-1befd8d9f70b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2mmfw" podUID="ab0fffca-fdb6-48fb-890d-1befd8d9f70b" Jan 23 23:52:43.661305 containerd[1831]: time="2026-01-23T23:52:43.661248048Z" level=error msg="Failed to destroy network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.661846 containerd[1831]: time="2026-01-23T23:52:43.661714049Z" level=error msg="encountered an error cleaning up failed sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.661846 containerd[1831]: time="2026-01-23T23:52:43.661762209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-j9z4b,Uid:86223f84-2792-4d18-8124-56ab2f35f54f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.662359 kubelet[3365]: E0123 23:52:43.662144 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.662359 kubelet[3365]: E0123 23:52:43.662208 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" Jan 23 23:52:43.662359 kubelet[3365]: E0123 23:52:43.662231 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" Jan 23 23:52:43.662614 kubelet[3365]: E0123 23:52:43.662279 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8fbd4d54-j9z4b_calico-apiserver(86223f84-2792-4d18-8124-56ab2f35f54f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8fbd4d54-j9z4b_calico-apiserver(86223f84-2792-4d18-8124-56ab2f35f54f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:52:43.689232 containerd[1831]: time="2026-01-23T23:52:43.689161880Z" level=error msg="Failed to destroy network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.690722 containerd[1831]: time="2026-01-23T23:52:43.690594684Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.691172 containerd[1831]: time="2026-01-23T23:52:43.691129846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c5ccfd59c-h6qxv,Uid:3e54eef2-74de-4252-afd0-c8cf40739108,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.692159 kubelet[3365]: E0123 23:52:43.691952 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.692159 kubelet[3365]: E0123 23:52:43.692101 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c5ccfd59c-h6qxv" Jan 23 23:52:43.692159 kubelet[3365]: E0123 23:52:43.692125 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c5ccfd59c-h6qxv" Jan 23 23:52:43.692635 kubelet[3365]: E0123 23:52:43.692362 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c5ccfd59c-h6qxv_calico-system(3e54eef2-74de-4252-afd0-c8cf40739108)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c5ccfd59c-h6qxv_calico-system(3e54eef2-74de-4252-afd0-c8cf40739108)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c5ccfd59c-h6qxv" podUID="3e54eef2-74de-4252-afd0-c8cf40739108" Jan 23 23:52:43.761968 containerd[1831]: time="2026-01-23T23:52:43.761917350Z" level=error msg="Failed to destroy network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.763048 containerd[1831]: time="2026-01-23T23:52:43.762972233Z" 
level=error msg="encountered an error cleaning up failed sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.763048 containerd[1831]: time="2026-01-23T23:52:43.763034753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-2csvx,Uid:e432e42b-559a-473b-8e55-fe59b8af82e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.763688 kubelet[3365]: E0123 23:52:43.763556 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.763688 kubelet[3365]: E0123 23:52:43.763615 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" Jan 23 23:52:43.763688 kubelet[3365]: E0123 23:52:43.763636 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" Jan 23 23:52:43.765399 kubelet[3365]: E0123 23:52:43.763897 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8fbd4d54-2csvx_calico-apiserver(e432e42b-559a-473b-8e55-fe59b8af82e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8fbd4d54-2csvx_calico-apiserver(e432e42b-559a-473b-8e55-fe59b8af82e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:52:43.771024 containerd[1831]: time="2026-01-23T23:52:43.770975614Z" level=error msg="Failed to destroy network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.771466 containerd[1831]: time="2026-01-23T23:52:43.771420055Z" level=error msg="Failed to destroy network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.771734 containerd[1831]: time="2026-01-23T23:52:43.771705696Z" level=error msg="encountered an error cleaning up failed sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.771886 containerd[1831]: time="2026-01-23T23:52:43.771853136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2ck2,Uid:d4b92f62-d0ce-4074-b14e-99f94c7e34c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.773105 containerd[1831]: time="2026-01-23T23:52:43.771710696Z" level=error msg="encountered an error cleaning up failed sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.773351 kubelet[3365]: E0123 23:52:43.773322 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.773480 kubelet[3365]: E0123 23:52:43.773461 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2ck2" Jan 23 23:52:43.773580 kubelet[3365]: E0123 23:52:43.773564 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2ck2" Jan 23 23:52:43.773683 kubelet[3365]: E0123 23:52:43.773662 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d2ck2_kube-system(d4b92f62-d0ce-4074-b14e-99f94c7e34c5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d2ck2_kube-system(d4b92f62-d0ce-4074-b14e-99f94c7e34c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d2ck2" podUID="d4b92f62-d0ce-4074-b14e-99f94c7e34c5" Jan 23 23:52:43.774097 containerd[1831]: time="2026-01-23T23:52:43.774041502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mj4bl,Uid:e6b6e508-b275-4ee9-aa24-d58a31eb441c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.774358 kubelet[3365]: E0123 23:52:43.774328 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.774413 kubelet[3365]: E0123 23:52:43.774367 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mj4bl" Jan 23 23:52:43.774413 kubelet[3365]: E0123 23:52:43.774385 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mj4bl" Jan 23 23:52:43.774467 kubelet[3365]: E0123 23:52:43.774411 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mj4bl_calico-system(e6b6e508-b275-4ee9-aa24-d58a31eb441c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mj4bl_calico-system(e6b6e508-b275-4ee9-aa24-d58a31eb441c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:52:43.776632 containerd[1831]: time="2026-01-23T23:52:43.776526868Z" level=error msg="Failed to destroy network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.777014 containerd[1831]: time="2026-01-23T23:52:43.776945309Z" level=error msg="encountered an error cleaning up failed sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.777014 containerd[1831]: time="2026-01-23T23:52:43.776980869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bfff8d8c9-qd78x,Uid:4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.777272 kubelet[3365]: E0123 23:52:43.777204 3365 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.777272 kubelet[3365]: E0123 23:52:43.777237 3365 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" Jan 23 23:52:43.777272 kubelet[3365]: E0123 23:52:43.777254 3365 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" Jan 23 23:52:43.777395 kubelet[3365]: E0123 23:52:43.777287 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bfff8d8c9-qd78x_calico-system(4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bfff8d8c9-qd78x_calico-system(4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:52:43.932538 kubelet[3365]: I0123 
23:52:43.931269 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:52:43.934633 containerd[1831]: time="2026-01-23T23:52:43.934317640Z" level=info msg="StopPodSandbox for \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\"" Jan 23 23:52:43.934633 containerd[1831]: time="2026-01-23T23:52:43.934529121Z" level=info msg="Ensure that sandbox 939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe in task-service has been cleanup successfully" Jan 23 23:52:43.937245 kubelet[3365]: I0123 23:52:43.937221 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:52:43.938185 containerd[1831]: time="2026-01-23T23:52:43.938058210Z" level=info msg="StopPodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\"" Jan 23 23:52:43.938404 containerd[1831]: time="2026-01-23T23:52:43.938385371Z" level=info msg="Ensure that sandbox 04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004 in task-service has been cleanup successfully" Jan 23 23:52:43.945815 containerd[1831]: time="2026-01-23T23:52:43.945778350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:52:43.949576 kubelet[3365]: I0123 23:52:43.949500 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:52:43.950923 containerd[1831]: time="2026-01-23T23:52:43.950810803Z" level=info msg="StopPodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\"" Jan 23 23:52:43.951000 containerd[1831]: time="2026-01-23T23:52:43.950978683Z" level=info msg="Ensure that sandbox 903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18 in task-service has been cleanup successfully" Jan 23 23:52:43.961125 kubelet[3365]: I0123 23:52:43.961015 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:52:43.962867 containerd[1831]: time="2026-01-23T23:52:43.962660514Z" level=info msg="StopPodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\"" Jan 23 23:52:43.965619 containerd[1831]: time="2026-01-23T23:52:43.963875877Z" level=info msg="Ensure that sandbox d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27 in task-service has been cleanup successfully" Jan 23 23:52:43.977478 containerd[1831]: time="2026-01-23T23:52:43.977436752Z" level=error msg="StopPodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" failed" error="failed to destroy network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:43.979435 kubelet[3365]: E0123 23:52:43.979396 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:52:43.979535 kubelet[3365]: E0123 23:52:43.979464 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004"} Jan 23 23:52:43.979535 kubelet[3365]: E0123 23:52:43.979522 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6b6e508-b275-4ee9-aa24-d58a31eb441c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:43.979624 kubelet[3365]: E0123 23:52:43.979551 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6b6e508-b275-4ee9-aa24-d58a31eb441c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:52:43.982978 kubelet[3365]: I0123 23:52:43.982953 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:52:43.983742 containerd[1831]: time="2026-01-23T23:52:43.983706009Z" level=info msg="StopPodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\"" Jan 23 23:52:43.984937 containerd[1831]: time="2026-01-23T23:52:43.984898212Z" level=info msg="Ensure that sandbox 7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411 in task-service has been cleanup successfully" Jan 23 23:52:43.992822 kubelet[3365]: I0123 23:52:43.992776 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:52:43.995911 containerd[1831]: time="2026-01-23T23:52:43.995650080Z" level=info msg="StopPodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\"" Jan 23 23:52:43.996421 containerd[1831]: time="2026-01-23T23:52:43.996394242Z" level=info msg="Ensure that sandbox c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb in task-service has been cleanup successfully" Jan 23 23:52:43.997077 kubelet[3365]: I0123 23:52:43.997051 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:52:44.000539 containerd[1831]: time="2026-01-23T23:52:44.000507653Z" level=info msg="StopPodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\"" Jan 23 23:52:44.000705 containerd[1831]: time="2026-01-23T23:52:44.000686573Z" level=info msg="Ensure that sandbox 0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11 in task-service has been cleanup successfully" Jan 23 23:52:44.004539 containerd[1831]: time="2026-01-23T23:52:44.004489423Z" level=error msg="StopPodSandbox for 
\"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" failed" error="failed to destroy network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.005651 kubelet[3365]: E0123 23:52:44.005619 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:52:44.005747 kubelet[3365]: E0123 23:52:44.005655 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe"} Jan 23 23:52:44.005747 kubelet[3365]: E0123 23:52:44.005692 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4b92f62-d0ce-4074-b14e-99f94c7e34c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.005747 kubelet[3365]: E0123 23:52:44.005713 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4b92f62-d0ce-4074-b14e-99f94c7e34c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d2ck2" podUID="d4b92f62-d0ce-4074-b14e-99f94c7e34c5" Jan 23 23:52:44.013379 kubelet[3365]: I0123 23:52:44.013357 3365 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:52:44.018684 containerd[1831]: time="2026-01-23T23:52:44.018650020Z" level=info msg="StopPodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\"" Jan 23 23:52:44.019276 containerd[1831]: time="2026-01-23T23:52:44.018808020Z" level=info msg="Ensure that sandbox 7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f in task-service has been cleanup successfully" Jan 23 23:52:44.043086 containerd[1831]: time="2026-01-23T23:52:44.043037284Z" level=error msg="StopPodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" failed" error="failed to destroy network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.043324 kubelet[3365]: E0123 23:52:44.043281 3365 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:52:44.043383 kubelet[3365]: E0123 23:52:44.043337 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27"} Jan 23 23:52:44.043409 kubelet[3365]: E0123 23:52:44.043380 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86223f84-2792-4d18-8124-56ab2f35f54f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.043472 kubelet[3365]: E0123 23:52:44.043402 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86223f84-2792-4d18-8124-56ab2f35f54f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:52:44.056881 containerd[1831]: time="2026-01-23T23:52:44.056804280Z" level=error msg="StopPodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" failed" error="failed to destroy network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.057183 kubelet[3365]: E0123 23:52:44.057043 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:52:44.057183 kubelet[3365]: E0123 23:52:44.057096 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f"} Jan 23 23:52:44.057183 kubelet[3365]: E0123 23:52:44.057129 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d85237ab-62c3-4029-9724-6c41efba9b29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.057183 kubelet[3365]: E0123 23:52:44.057149 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d85237ab-62c3-4029-9724-6c41efba9b29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:52:44.063110 containerd[1831]: time="2026-01-23T23:52:44.062989656Z" level=error msg="StopPodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" failed" error="failed to destroy network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.063344 kubelet[3365]: E0123 23:52:44.063237 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:52:44.063344 kubelet[3365]: E0123 23:52:44.063283 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18"} Jan 23 23:52:44.063344 kubelet[3365]: E0123 23:52:44.063318 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e432e42b-559a-473b-8e55-fe59b8af82e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.063514 kubelet[3365]: E0123 23:52:44.063340 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e432e42b-559a-473b-8e55-fe59b8af82e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:52:44.074076 containerd[1831]: time="2026-01-23T23:52:44.073840644Z" level=error msg="StopPodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" failed" error="failed to destroy network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.074202 kubelet[3365]: E0123 23:52:44.074071 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:52:44.074202 kubelet[3365]: E0123 23:52:44.074116 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb"} Jan 23 23:52:44.074202 kubelet[3365]: E0123 23:52:44.074160 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e54eef2-74de-4252-afd0-c8cf40739108\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.074326 kubelet[3365]: E0123 23:52:44.074263 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e54eef2-74de-4252-afd0-c8cf40739108\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c5ccfd59c-h6qxv" podUID="3e54eef2-74de-4252-afd0-c8cf40739108" Jan 23 23:52:44.075949 containerd[1831]: time="2026-01-23T23:52:44.075730849Z" level=error msg="StopPodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" failed" error="failed to destroy network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.076038 kubelet[3365]: E0123 23:52:44.075941 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:52:44.076038 kubelet[3365]: E0123 23:52:44.075975 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411"} Jan 23 23:52:44.076038 kubelet[3365]: E0123 23:52:44.075998 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.076038 kubelet[3365]: E0123 23:52:44.076016 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:52:44.077919 containerd[1831]: time="2026-01-23T23:52:44.077871814Z" level=error msg="StopPodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" failed" error="failed to destroy network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:52:44.078076 kubelet[3365]: E0123 23:52:44.078039 3365 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:52:44.078119 kubelet[3365]: E0123 23:52:44.078089 3365 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11"} Jan 23 23:52:44.078119 kubelet[3365]: E0123 23:52:44.078113 3365 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab0fffca-fdb6-48fb-890d-1befd8d9f70b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:52:44.078184 kubelet[3365]: E0123 23:52:44.078129 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab0fffca-fdb6-48fb-890d-1befd8d9f70b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2mmfw" podUID="ab0fffca-fdb6-48fb-890d-1befd8d9f70b" Jan 23 23:52:44.444557 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27-shm.mount: Deactivated successfully. Jan 23 23:52:44.445031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11-shm.mount: Deactivated successfully. Jan 23 23:52:50.987360 kubelet[3365]: I0123 23:52:50.987170 3365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 23:52:51.090810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844314871.mount: Deactivated successfully. Jan 23 23:52:51.258756 containerd[1831]: time="2026-01-23T23:52:51.258709829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:52:51.262294 containerd[1831]: time="2026-01-23T23:52:51.262253878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:52:51.265719 containerd[1831]: time="2026-01-23T23:52:51.265669767Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:52:51.272442 containerd[1831]: time="2026-01-23T23:52:51.272395024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:52:51.273118 containerd[1831]: time="2026-01-23T23:52:51.272971266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.327155236s" Jan 23 23:52:51.273118 containerd[1831]: time="2026-01-23T23:52:51.273030946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:52:51.287673 containerd[1831]: time="2026-01-23T23:52:51.285925780Z" level=info msg="CreateContainer within sandbox \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:52:51.326584 containerd[1831]: time="2026-01-23T23:52:51.326533326Z" level=info msg="CreateContainer within sandbox \"7b023b70406a4f8a53660e69df7c80eca3317928da93b48575b27ca47bc80bd3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"04ff66239feb65dce387fd8d13a077206aa6966d381fb676614c4636e2ff685c\"" Jan 23 23:52:51.327282 containerd[1831]: time="2026-01-23T23:52:51.327254927Z" level=info msg="StartContainer for \"04ff66239feb65dce387fd8d13a077206aa6966d381fb676614c4636e2ff685c\"" Jan 23 23:52:51.384895 containerd[1831]: time="2026-01-23T23:52:51.384254516Z" level=info msg="StartContainer for \"04ff66239feb65dce387fd8d13a077206aa6966d381fb676614c4636e2ff685c\" returns successfully" Jan 23 23:52:51.637651 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:52:51.637819 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
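
Every sandbox failure logged above has the same root cause: the Calico CNI plugin refuses both ADD and DEL operations until /var/lib/calico/nodename exists, and that file is only written by the calico-node container, which had not started yet. The entries just above mark the turning point: containerd finishes pulling ghcr.io/flatcar/calico/node:v3.30.4 (about 7.3 seconds for a roughly 151 MB image), starts the calico-node container, and the kernel loads the WireGuard module that Calico can use for encrypted node-to-node traffic. The Go sketch below is an illustrative reconstruction of the precondition the plugin is enforcing; it is not Calico's actual source, and only the /var/lib/calico/nodename path and the error wording are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by calico-node at startup; the CNI plugin reads
// it to learn which Calico node object to attach workloads to.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the gate that produced every failure above: until
// calico-node runs and mounts /var/lib/calico/, all CNI calls must fail.
func readNodename() (string, error) {
	// os.Stat fails with exactly the message seen in the log:
	// "stat /var/lib/calico/nodename: no such file or directory".
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI would register workloads against node:", name)
}
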
Jan 23 23:52:51.752221 containerd[1831]: time="2026-01-23T23:52:51.752178986Z" level=info msg="StopPodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\"" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.865 [INFO][4583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.865 [INFO][4583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" iface="eth0" netns="/var/run/netns/cni-ce97182b-be64-f347-824e-2543fe2f3642" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.866 [INFO][4583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" iface="eth0" netns="/var/run/netns/cni-ce97182b-be64-f347-824e-2543fe2f3642" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.867 [INFO][4583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" iface="eth0" netns="/var/run/netns/cni-ce97182b-be64-f347-824e-2543fe2f3642" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.867 [INFO][4583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.867 [INFO][4583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.897 [INFO][4592] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.897 [INFO][4592] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.897 [INFO][4592] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.910 [WARNING][4592] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.910 [INFO][4592] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.912 [INFO][4592] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:51.919921 containerd[1831]: 2026-01-23 23:52:51.918 [INFO][4583] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:52:51.920673 containerd[1831]: time="2026-01-23T23:52:51.920056092Z" level=info msg="TearDown network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" successfully" Jan 23 23:52:51.920673 containerd[1831]: time="2026-01-23T23:52:51.920080772Z" level=info msg="StopPodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" returns successfully" Jan 23 23:52:52.048234 kubelet[3365]: I0123 23:52:52.048201 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-ca-bundle\") pod \"3e54eef2-74de-4252-afd0-c8cf40739108\" (UID: \"3e54eef2-74de-4252-afd0-c8cf40739108\") " Jan 23 23:52:52.048892 kubelet[3365]: I0123 23:52:52.048269 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-backend-key-pair\") pod \"3e54eef2-74de-4252-afd0-c8cf40739108\" (UID: \"3e54eef2-74de-4252-afd0-c8cf40739108\") " Jan 23 23:52:52.048892 kubelet[3365]: I0123 23:52:52.048289 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsd9j\" (UniqueName: \"kubernetes.io/projected/3e54eef2-74de-4252-afd0-c8cf40739108-kube-api-access-gsd9j\") pod \"3e54eef2-74de-4252-afd0-c8cf40739108\" (UID: \"3e54eef2-74de-4252-afd0-c8cf40739108\") " Jan 23 23:52:52.048892 kubelet[3365]: I0123 23:52:52.048792 3365 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3e54eef2-74de-4252-afd0-c8cf40739108" (UID: "3e54eef2-74de-4252-afd0-c8cf40739108"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:52:52.059831 kubelet[3365]: I0123 23:52:52.059357 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kcvvr" podStartSLOduration=1.91635651 podStartE2EDuration="20.059241085s" podCreationTimestamp="2026-01-23 23:52:32 +0000 UTC" firstStartedPulling="2026-01-23 23:52:33.130831173 +0000 UTC m=+26.419691859" lastFinishedPulling="2026-01-23 23:52:51.273715748 +0000 UTC m=+44.562576434" observedRunningTime="2026-01-23 23:52:52.058203082 +0000 UTC m=+45.347063808" watchObservedRunningTime="2026-01-23 23:52:52.059241085 +0000 UTC m=+45.348101771" Jan 23 23:52:52.061527 kubelet[3365]: I0123 23:52:52.061394 3365 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e54eef2-74de-4252-afd0-c8cf40739108-kube-api-access-gsd9j" (OuterVolumeSpecName: "kube-api-access-gsd9j") pod "3e54eef2-74de-4252-afd0-c8cf40739108" (UID: "3e54eef2-74de-4252-afd0-c8cf40739108"). InnerVolumeSpecName "kube-api-access-gsd9j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:52:52.061527 kubelet[3365]: I0123 23:52:52.061477 3365 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3e54eef2-74de-4252-afd0-c8cf40739108" (UID: "3e54eef2-74de-4252-afd0-c8cf40739108"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:52:52.093078 systemd[1]: run-netns-cni\x2dce97182b\x2dbe64\x2df347\x2d824e\x2d2543fe2f3642.mount: Deactivated successfully. Jan 23 23:52:52.093201 systemd[1]: var-lib-kubelet-pods-3e54eef2\x2d74de\x2d4252\x2dafd0\x2dc8cf40739108-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 23:52:52.093282 systemd[1]: var-lib-kubelet-pods-3e54eef2\x2d74de\x2d4252\x2dafd0\x2dc8cf40739108-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgsd9j.mount: Deactivated successfully. Jan 23 23:52:52.149274 kubelet[3365]: I0123 23:52:52.149241 3365 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-73953443dc\" DevicePath \"\"" Jan 23 23:52:52.149274 kubelet[3365]: I0123 23:52:52.149273 3365 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gsd9j\" (UniqueName: \"kubernetes.io/projected/3e54eef2-74de-4252-afd0-c8cf40739108-kube-api-access-gsd9j\") on node \"ci-4081.3.6-n-73953443dc\" DevicePath \"\"" Jan 23 23:52:52.149274 kubelet[3365]: I0123 23:52:52.149285 3365 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e54eef2-74de-4252-afd0-c8cf40739108-whisker-ca-bundle\") on node \"ci-4081.3.6-n-73953443dc\" DevicePath \"\"" Jan 23 23:52:52.550678 kubelet[3365]: I0123 23:52:52.550516 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6hk\" (UniqueName: \"kubernetes.io/projected/020e3cf1-f5a2-4e03-b3f1-21fc39350338-kube-api-access-hl6hk\") pod \"whisker-6bffcb8bdf-jghmp\" (UID: \"020e3cf1-f5a2-4e03-b3f1-21fc39350338\") " pod="calico-system/whisker-6bffcb8bdf-jghmp" Jan 23 23:52:52.550678 kubelet[3365]: I0123 23:52:52.550584 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/020e3cf1-f5a2-4e03-b3f1-21fc39350338-whisker-ca-bundle\") pod \"whisker-6bffcb8bdf-jghmp\" (UID: \"020e3cf1-f5a2-4e03-b3f1-21fc39350338\") " pod="calico-system/whisker-6bffcb8bdf-jghmp" Jan 23 23:52:52.550678 kubelet[3365]: I0123 23:52:52.550636 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/020e3cf1-f5a2-4e03-b3f1-21fc39350338-whisker-backend-key-pair\") pod \"whisker-6bffcb8bdf-jghmp\" (UID: \"020e3cf1-f5a2-4e03-b3f1-21fc39350338\") " pod="calico-system/whisker-6bffcb8bdf-jghmp" Jan 23 23:52:52.717968 containerd[1831]: time="2026-01-23T23:52:52.717930035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bffcb8bdf-jghmp,Uid:020e3cf1-f5a2-4e03-b3f1-21fc39350338,Namespace:calico-system,Attempt:0,}" Jan 23 23:52:52.819954 kubelet[3365]: I0123 23:52:52.817895 3365 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e54eef2-74de-4252-afd0-c8cf40739108" path="/var/lib/kubelet/pods/3e54eef2-74de-4252-afd0-c8cf40739108/volumes" Jan 23 23:52:52.881311 systemd-networkd[1410]: cali5984958fe51: Link UP Jan 23 23:52:52.882081 systemd-networkd[1410]: cali5984958fe51: Gained carrier Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.777 [INFO][4613] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:52:52.901737 containerd[1831]: 
2026-01-23 23:52:52.790 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0 whisker-6bffcb8bdf- calico-system 020e3cf1-f5a2-4e03-b3f1-21fc39350338 928 0 2026-01-23 23:52:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bffcb8bdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc whisker-6bffcb8bdf-jghmp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5984958fe51 [] [] }} ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.790 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.818 [INFO][4626] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" HandleID="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.818 [INFO][4626] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" HandleID="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-73953443dc", "pod":"whisker-6bffcb8bdf-jghmp", "timestamp":"2026-01-23 23:52:52.818609691 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.818 [INFO][4626] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.818 [INFO][4626] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.820 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.832 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.836 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.841 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.843 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.845 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.845 [INFO][4626] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.847 [INFO][4626] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72 Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.852 [INFO][4626] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.857 [INFO][4626] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.1/26] block=192.168.0.0/26 handle="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.857 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.1/26] handle="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.857 [INFO][4626] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
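
The [INFO][4626] entries above replay one complete Calico IPAM transaction for the new whisker pod: take the host-wide IPAM lock, look up this host's block affinity, load the affine block 192.168.0.0/26, claim the lowest free address from it, write the block back in order to claim the IP, and release the lock; the next entry reports the result, 192.168.0.1/26. The toy model below shows the same allocate-under-a-lock pattern; the block type and its bitmap are invented for illustration, and real Calico blocks live in the datastore rather than in process memory.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for a Calico IPAM affinity block such as
// 192.168.0.0/26: 64 addresses tracked with a simple in-memory bitmap.
type block struct {
	mu    sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	cidr  netip.Prefix
	inUse [64]bool
}

// assign claims the lowest free address, the way this node's transaction
// produced 192.168.0.1 from block 192.168.0.0/26.
func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	addr := b.cidr.Addr() // 192.168.0.0, the network address
	for i := 0; i < 64; i++ {
		// Offset 0 (the network address itself) is never handed out.
		if i > 0 && !b.inUse[i] {
			b.inUse[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.0.0/26")}
	ip, _ := b.assign()
	fmt.Println("assigned:", ip) // assigned: 192.168.0.1
}
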
Jan 23 23:52:52.901737 containerd[1831]: 2026-01-23 23:52:52.857 [INFO][4626] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.1/26] IPv6=[] ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" HandleID="k8s-pod-network.93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.902405 containerd[1831]: 2026-01-23 23:52:52.860 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0", GenerateName:"whisker-6bffcb8bdf-", Namespace:"calico-system", SelfLink:"", UID:"020e3cf1-f5a2-4e03-b3f1-21fc39350338", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bffcb8bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"whisker-6bffcb8bdf-jghmp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5984958fe51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:52.902405 containerd[1831]: 2026-01-23 23:52:52.860 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.1/32] ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.902405 containerd[1831]: 2026-01-23 23:52:52.860 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5984958fe51 ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.902405 containerd[1831]: 2026-01-23 23:52:52.882 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.902405 containerd[1831]: 2026-01-23 23:52:52.883 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system"
Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0", GenerateName:"whisker-6bffcb8bdf-", Namespace:"calico-system", SelfLink:"", UID:"020e3cf1-f5a2-4e03-b3f1-21fc39350338", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bffcb8bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72", Pod:"whisker-6bffcb8bdf-jghmp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.0.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5984958fe51", MAC:"42:81:02:0f:9e:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:52.902405 containerd[1831]: 2026-01-23 23:52:52.899 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72" Namespace="calico-system" Pod="whisker-6bffcb8bdf-jghmp" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--6bffcb8bdf--jghmp-eth0" Jan 23 23:52:52.919384 containerd[1831]: time="2026-01-23T23:52:52.919206706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:52.919384 containerd[1831]: time="2026-01-23T23:52:52.919283906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:52.919384 containerd[1831]: time="2026-01-23T23:52:52.919310786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:52.922005 containerd[1831]: time="2026-01-23T23:52:52.919830748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:52.963577 containerd[1831]: time="2026-01-23T23:52:52.963524538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bffcb8bdf-jghmp,Uid:020e3cf1-f5a2-4e03-b3f1-21fc39350338,Namespace:calico-system,Attempt:0,} returns sandbox id \"93ec1c3ffcb373329ae226ffd3fce6d25cf52e839187ce21d0c0818a93c41e72\"" Jan 23 23:52:52.965215 containerd[1831]: time="2026-01-23T23:52:52.965183663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:52:53.337887 kernel: bpftool[4802]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:52:53.432201 containerd[1831]: time="2026-01-23T23:52:53.432137247Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:53.438369 containerd[1831]: time="2026-01-23T23:52:53.438266902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:52:53.438369 containerd[1831]: time="2026-01-23T23:52:53.438335023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:52:53.438820 kubelet[3365]: E0123 23:52:53.438483 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:52:53.438820 kubelet[3365]: E0123 23:52:53.438535 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:52:53.443510 kubelet[3365]: E0123 23:52:53.443438 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc17560a7d374cc8a5379ccc150c7fc7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:53.445848 containerd[1831]: time="2026-01-23T23:52:53.445818642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:52:53.551055 systemd-networkd[1410]: vxlan.calico: Link UP Jan 23 23:52:53.551065 systemd-networkd[1410]: vxlan.calico: Gained carrier Jan 23 23:52:53.732958 containerd[1831]: time="2026-01-23T23:52:53.732488329Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:53.735467 containerd[1831]: time="2026-01-23T23:52:53.735423016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:52:53.735553 containerd[1831]: time="2026-01-23T23:52:53.735527496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:52:53.735923 kubelet[3365]: E0123 23:52:53.735682 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:52:53.735923 kubelet[3365]: E0123 23:52:53.735741 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:52:53.736031 kubelet[3365]: E0123 23:52:53.735872 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:53.737343 kubelet[3365]: E0123 23:52:53.737290 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:52:54.045963 kubelet[3365]: E0123 23:52:54.045915 3365 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:52:54.748090 systemd-networkd[1410]: vxlan.calico: Gained IPv6LL Jan 23 23:52:54.818727 containerd[1831]: time="2026-01-23T23:52:54.818426563Z" level=info msg="StopPodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\"" Jan 23 23:52:54.820134 containerd[1831]: time="2026-01-23T23:52:54.819792726Z" level=info msg="StopPodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\"" Jan 23 23:52:54.820551 containerd[1831]: time="2026-01-23T23:52:54.820454888Z" level=info msg="StopPodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\"" Jan 23 23:52:54.940744 systemd-networkd[1410]: cali5984958fe51: Gained IPv6LL Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.905 [INFO][4907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.905 [INFO][4907] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" iface="eth0" netns="/var/run/netns/cni-b15527c4-0568-38b8-90eb-0b8fda16fbe4" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.906 [INFO][4907] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" iface="eth0" netns="/var/run/netns/cni-b15527c4-0568-38b8-90eb-0b8fda16fbe4" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.907 [INFO][4907] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" iface="eth0" netns="/var/run/netns/cni-b15527c4-0568-38b8-90eb-0b8fda16fbe4" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.907 [INFO][4907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.907 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.946 [INFO][4927] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.946 [INFO][4927] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.946 [INFO][4927] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.969 [WARNING][4927] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.969 [INFO][4927] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.972 [INFO][4927] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:54.979123 containerd[1831]: 2026-01-23 23:52:54.976 [INFO][4907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:52:54.979719 containerd[1831]: time="2026-01-23T23:52:54.979690692Z" level=info msg="TearDown network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" successfully" Jan 23 23:52:54.979801 containerd[1831]: time="2026-01-23T23:52:54.979788452Z" level=info msg="StopPodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" returns successfully" Jan 23 23:52:54.980675 containerd[1831]: time="2026-01-23T23:52:54.980648214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bfff8d8c9-qd78x,Uid:4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24,Namespace:calico-system,Attempt:1,}" Jan 23 23:52:54.986229 systemd[1]: run-netns-cni\x2db15527c4\x2d0568\x2d38b8\x2d90eb\x2d0b8fda16fbe4.mount: Deactivated successfully. 
Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.935 [INFO][4909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.936 [INFO][4909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" iface="eth0" netns="/var/run/netns/cni-b167e424-4fdd-e591-060c-d7137156ca26" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.936 [INFO][4909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" iface="eth0" netns="/var/run/netns/cni-b167e424-4fdd-e591-060c-d7137156ca26" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.936 [INFO][4909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" iface="eth0" netns="/var/run/netns/cni-b167e424-4fdd-e591-060c-d7137156ca26" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.937 [INFO][4909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.937 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.983 [INFO][4933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.983 [INFO][4933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.983 [INFO][4933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.993 [WARNING][4933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.993 [INFO][4933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.995 [INFO][4933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:55.001939 containerd[1831]: 2026-01-23 23:52:54.999 [INFO][4909] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:52:55.003203 containerd[1831]: time="2026-01-23T23:52:55.003159911Z" level=info msg="TearDown network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" successfully" Jan 23 23:52:55.003303 containerd[1831]: time="2026-01-23T23:52:55.003289832Z" level=info msg="StopPodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" returns successfully" Jan 23 23:52:55.004545 containerd[1831]: time="2026-01-23T23:52:55.004270154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mmfw,Uid:ab0fffca-fdb6-48fb-890d-1befd8d9f70b,Namespace:kube-system,Attempt:1,}" Jan 23 23:52:55.010299 systemd[1]: run-netns-cni\x2db167e424\x2d4fdd\x2de591\x2d060c\x2dd7137156ca26.mount: Deactivated successfully. Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:54.970 [INFO][4908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:54.971 [INFO][4908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" iface="eth0" netns="/var/run/netns/cni-274b1111-138e-0fa3-6c38-95b612b88a03" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:54.971 [INFO][4908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" iface="eth0" netns="/var/run/netns/cni-274b1111-138e-0fa3-6c38-95b612b88a03" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:54.971 [INFO][4908] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" iface="eth0" netns="/var/run/netns/cni-274b1111-138e-0fa3-6c38-95b612b88a03" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:54.971 [INFO][4908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:54.971 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.014 [INFO][4939] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.014 [INFO][4939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.014 [INFO][4939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.023 [WARNING][4939] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.023 [INFO][4939] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.024 [INFO][4939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:55.028506 containerd[1831]: 2026-01-23 23:52:55.026 [INFO][4908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:52:55.029504 containerd[1831]: time="2026-01-23T23:52:55.029366338Z" level=info msg="TearDown network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" successfully" Jan 23 23:52:55.029504 containerd[1831]: time="2026-01-23T23:52:55.029396058Z" level=info msg="StopPodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" returns successfully" Jan 23 23:52:55.032026 systemd[1]: run-netns-cni\x2d274b1111\x2d138e\x2d0fa3\x2d6c38\x2d95b612b88a03.mount: Deactivated successfully. Jan 23 23:52:55.033598 containerd[1831]: time="2026-01-23T23:52:55.033061027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mj4bl,Uid:e6b6e508-b275-4ee9-aa24-d58a31eb441c,Namespace:calico-system,Attempt:1,}" Jan 23 23:52:56.510809 systemd-networkd[1410]: cali75629270245: Link UP Jan 23 23:52:56.511939 systemd-networkd[1410]: cali75629270245: Gained carrier Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.400 [INFO][4951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0 coredns-668d6bf9bc- kube-system ab0fffca-fdb6-48fb-890d-1befd8d9f70b 956 0 2026-01-23 23:52:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc coredns-668d6bf9bc-2mmfw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75629270245 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.401 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.453 [INFO][4986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" HandleID="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" 
Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.454 [INFO][4986] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" HandleID="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-73953443dc", "pod":"coredns-668d6bf9bc-2mmfw", "timestamp":"2026-01-23 23:52:56.45372243 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.454 [INFO][4986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.454 [INFO][4986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.454 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.468 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.474 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.479 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.481 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.483 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.483 [INFO][4986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.484 [INFO][4986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64 Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.491 [INFO][4986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.501 [INFO][4986] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.2/26] block=192.168.0.0/26 handle="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.501 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.2/26] 
handle="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.501 [INFO][4986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:56.537018 containerd[1831]: 2026-01-23 23:52:56.501 [INFO][4986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.2/26] IPv6=[] ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" HandleID="k8s-pod-network.66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:56.537808 containerd[1831]: 2026-01-23 23:52:56.505 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ab0fffca-fdb6-48fb-890d-1befd8d9f70b", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"coredns-668d6bf9bc-2mmfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75629270245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:56.537808 containerd[1831]: 2026-01-23 23:52:56.505 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.2/32] ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:56.537808 containerd[1831]: 2026-01-23 23:52:56.505 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75629270245 ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 
23:52:56.537808 containerd[1831]: 2026-01-23 23:52:56.513 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:56.537808 containerd[1831]: 2026-01-23 23:52:56.515 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ab0fffca-fdb6-48fb-890d-1befd8d9f70b", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64", Pod:"coredns-668d6bf9bc-2mmfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75629270245", MAC:"1e:b9:ff:99:05:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:56.537808 containerd[1831]: 2026-01-23 23:52:56.533 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64" Namespace="kube-system" Pod="coredns-668d6bf9bc-2mmfw" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:52:56.561692 containerd[1831]: time="2026-01-23T23:52:56.561592304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:56.561692 containerd[1831]: time="2026-01-23T23:52:56.561643584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:56.562077 containerd[1831]: time="2026-01-23T23:52:56.561672744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:56.562077 containerd[1831]: time="2026-01-23T23:52:56.561763824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:56.642622 systemd-networkd[1410]: calia2bb7d55976: Link UP Jan 23 23:52:56.644185 systemd-networkd[1410]: calia2bb7d55976: Gained carrier Jan 23 23:52:56.647280 containerd[1831]: time="2026-01-23T23:52:56.647235881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2mmfw,Uid:ab0fffca-fdb6-48fb-890d-1befd8d9f70b,Namespace:kube-system,Attempt:1,} returns sandbox id \"66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64\"" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.416 [INFO][4961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0 goldmane-666569f655- calico-system e6b6e508-b275-4ee9-aa24-d58a31eb441c 957 0 2026-01-23 23:52:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc goldmane-666569f655-mj4bl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia2bb7d55976 [] [] }} ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.416 [INFO][4961] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.473 [INFO][4993] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" HandleID="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.473 [INFO][4993] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" HandleID="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bb20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-73953443dc", "pod":"goldmane-666569f655-mj4bl", "timestamp":"2026-01-23 23:52:56.473631801 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.473 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.502 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.502 [INFO][4993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.569 [INFO][4993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.574 [INFO][4993] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.584 [INFO][4993] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.587 [INFO][4993] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.591 [INFO][4993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.592 [INFO][4993] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.594 [INFO][4993] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.599 [INFO][4993] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.619 [INFO][4993] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.3/26] block=192.168.0.0/26 handle="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.619 [INFO][4993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.3/26] handle="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.619 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
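
The ipam/ipam.go entries above trace a complete assignment cycle: confirm the host's affinity for block 192.168.0.0/26, load the block, claim the next free address (192.168.0.2 for coredns, then 192.168.0.3 for goldmane), and write the block back to claim the IP. A toy Go sketch of that block-scan step — the type and scan order are illustrative only, not Calico's actual data model:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // A toy /26 allocation block. Calico's real block also tracks handles
    // and attributes in the datastore; this only mirrors the "attempt to
    // assign 1 address from block" step seen in the log.
    type block struct {
        cidr      netip.Prefix        // e.g. 192.168.0.0/26
        allocated map[netip.Addr]bool // addresses already claimed
    }

    // nextFree scans the block in order and claims the first unallocated
    // address, the way the log shows .2, .3 and .4 being handed out in turn.
    func (b *block) nextFree() (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if !b.allocated[a] {
                b.allocated[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted: a /26 holds 64 addresses
    }

    func main() {
        b := &block{
            cidr:      netip.MustParsePrefix("192.168.0.0/26"),
            allocated: map[netip.Addr]bool{},
        }
        b.allocated[netip.MustParseAddr("192.168.0.0")] = true // network address
        b.allocated[netip.MustParseAddr("192.168.0.1")] = true // assigned earlier in the log
        for i := 0; i < 3; i++ {
            a, _ := b.nextFree()
            fmt.Println(a) // 192.168.0.2, 192.168.0.3, 192.168.0.4
        }
    }
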
Jan 23 23:52:56.674969 containerd[1831]: 2026-01-23 23:52:56.619 [INFO][4993] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.3/26] IPv6=[] ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" HandleID="k8s-pod-network.78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.675709 containerd[1831]: 2026-01-23 23:52:56.630 [INFO][4961] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6b6e508-b275-4ee9-aa24-d58a31eb441c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"goldmane-666569f655-mj4bl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2bb7d55976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:56.675709 containerd[1831]: 2026-01-23 23:52:56.630 [INFO][4961] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.3/32] ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.675709 containerd[1831]: 2026-01-23 23:52:56.630 [INFO][4961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2bb7d55976 ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.675709 containerd[1831]: 2026-01-23 23:52:56.642 [INFO][4961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.675709 containerd[1831]: 2026-01-23 23:52:56.645 [INFO][4961] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" 
Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6b6e508-b275-4ee9-aa24-d58a31eb441c", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e", Pod:"goldmane-666569f655-mj4bl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2bb7d55976", MAC:"d2:8a:7a:1c:46:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:56.675709 containerd[1831]: 2026-01-23 23:52:56.667 [INFO][4961] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e" Namespace="calico-system" Pod="goldmane-666569f655-mj4bl" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:52:56.707641 containerd[1831]: time="2026-01-23T23:52:56.707452554Z" level=info msg="CreateContainer within sandbox \"66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:52:56.732532 containerd[1831]: time="2026-01-23T23:52:56.732249297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:56.732532 containerd[1831]: time="2026-01-23T23:52:56.732307137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:56.732532 containerd[1831]: time="2026-01-23T23:52:56.732329937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:56.733568 containerd[1831]: time="2026-01-23T23:52:56.732549377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:56.767999 systemd-networkd[1410]: cali2b2b84d26f6: Link UP Jan 23 23:52:56.769621 containerd[1831]: time="2026-01-23T23:52:56.768267148Z" level=info msg="CreateContainer within sandbox \"66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46f85865df600d260d01cb2c771a9155d8ba2a2713a6a77df2ece94af92502f5\"" Jan 23 23:52:56.769106 systemd-networkd[1410]: cali2b2b84d26f6: Gained carrier Jan 23 23:52:56.772929 containerd[1831]: time="2026-01-23T23:52:56.770351713Z" level=info msg="StartContainer for \"46f85865df600d260d01cb2c771a9155d8ba2a2713a6a77df2ece94af92502f5\"" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.435 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0 calico-kube-controllers-6bfff8d8c9- calico-system 4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24 955 0 2026-01-23 23:52:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bfff8d8c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc calico-kube-controllers-6bfff8d8c9-qd78x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b2b84d26f6 [] [] }} ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.435 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.476 [INFO][4998] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" HandleID="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.477 [INFO][4998] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" HandleID="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-73953443dc", "pod":"calico-kube-controllers-6bfff8d8c9-qd78x", "timestamp":"2026-01-23 23:52:56.476566608 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:56.810366 
containerd[1831]: 2026-01-23 23:52:56.477 [INFO][4998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.620 [INFO][4998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.620 [INFO][4998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.669 [INFO][4998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.687 [INFO][4998] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.702 [INFO][4998] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.708 [INFO][4998] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.713 [INFO][4998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.713 [INFO][4998] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.717 [INFO][4998] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.734 [INFO][4998] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.753 [INFO][4998] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.4/26] block=192.168.0.0/26 handle="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.755 [INFO][4998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.4/26] handle="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.755 [INFO][4998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
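
Note how the three concurrent IPAM requests ([4986], [4993], [4998]) serialize on the host-wide IPAM lock: coredns acquires it at 56.454, goldmane only at 56.502, calico-kube-controllers not until 56.620. A generic Go sketch of that cross-process serialization pattern using flock — the lock-file path is hypothetical, and this is the general pattern, not Calico's actual locking mechanism:

    package main

    import (
        "log"
        "os"
        "syscall"
    )

    // withHostWideLock serializes critical sections across processes the way
    // the interleaved IPAM requests above are serialized: each logs "About
    // to acquire", blocks until the previous holder releases, then proceeds.
    func withHostWideLock(path string, critical func()) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()

        // Blocks until no other process holds the exclusive lock.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        critical() // e.g. read block, claim IP, write block back
        return nil
    }

    func main() {
        // Hypothetical lock-file path, for illustration only.
        err := withHostWideLock("/var/run/example-ipam.lock", func() {
            log.Println("acquired host-wide lock, assigning address")
        })
        if err != nil {
            log.Fatal(err)
        }
    }
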
Jan 23 23:52:56.810366 containerd[1831]: 2026-01-23 23:52:56.756 [INFO][4998] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.4/26] IPv6=[] ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" HandleID="k8s-pod-network.8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:56.810928 containerd[1831]: 2026-01-23 23:52:56.761 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0", GenerateName:"calico-kube-controllers-6bfff8d8c9-", Namespace:"calico-system", SelfLink:"", UID:"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bfff8d8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"calico-kube-controllers-6bfff8d8c9-qd78x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b2b84d26f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:56.810928 containerd[1831]: 2026-01-23 23:52:56.762 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.4/32] ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:56.810928 containerd[1831]: 2026-01-23 23:52:56.762 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b2b84d26f6 ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:56.810928 containerd[1831]: 2026-01-23 23:52:56.775 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" 
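
The dataplane_linux.go entries pair each workload endpoint with a host-side cali* veth and, on teardown, enter the pod's network namespace to delete its eth0 (or log "Workload's veth was already gone. Nothing to do."). A rough sketch of that teardown step using the vishvananda/netlink and netns packages; the namespace path is taken from the log, and the rest is an assumption-laden illustration rather than the plugin's actual code:

    package main

    import (
        "fmt"
        "runtime"

        "github.com/vishvananda/netlink"
        "github.com/vishvananda/netns"
    )

    // deleteWorkloadVeth mirrors the step logged as "Entered netns, deleting
    // veth": enter the pod's netns by path and remove its eth0, which also
    // tears down the host-side peer of the veth pair.
    func deleteWorkloadVeth(nsPath, ifName string) error {
        // Namespace switches are per-thread, so pin this goroutine.
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()

        origin, err := netns.Get()
        if err != nil {
            return err
        }
        defer origin.Close()
        defer netns.Set(origin) // hop back to the host namespace afterwards

        target, err := netns.GetFromPath(nsPath)
        if err != nil {
            return err
        }
        defer target.Close()

        if err := netns.Set(target); err != nil {
            return err
        }
        link, err := netlink.LinkByName(ifName)
        if err != nil {
            // "Workload's veth was already gone. Nothing to do."
            fmt.Println("no such interface, nothing to do")
            return nil
        }
        return netlink.LinkDel(link)
    }

    func main() {
        _ = deleteWorkloadVeth("/var/run/netns/cni-b15527c4-0568-38b8-90eb-0b8fda16fbe4", "eth0")
    }
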
Jan 23 23:52:56.810928 containerd[1831]: 2026-01-23 23:52:56.787 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0", GenerateName:"calico-kube-controllers-6bfff8d8c9-", Namespace:"calico-system", SelfLink:"", UID:"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bfff8d8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b", Pod:"calico-kube-controllers-6bfff8d8c9-qd78x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b2b84d26f6", MAC:"3e:f8:ff:c5:22:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:56.810928 containerd[1831]: 2026-01-23 23:52:56.803 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b" Namespace="calico-system" Pod="calico-kube-controllers-6bfff8d8c9-qd78x" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:52:56.862936 containerd[1831]: time="2026-01-23T23:52:56.862183106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:56.862936 containerd[1831]: time="2026-01-23T23:52:56.862244306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:56.862936 containerd[1831]: time="2026-01-23T23:52:56.862255586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:56.862936 containerd[1831]: time="2026-01-23T23:52:56.862331547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:56.874757 containerd[1831]: time="2026-01-23T23:52:56.874649018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mj4bl,Uid:e6b6e508-b275-4ee9-aa24-d58a31eb441c,Namespace:calico-system,Attempt:1,} returns sandbox id \"78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e\"" Jan 23 23:52:56.878528 containerd[1831]: time="2026-01-23T23:52:56.878449507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:52:56.893898 containerd[1831]: time="2026-01-23T23:52:56.891597741Z" level=info msg="StartContainer for \"46f85865df600d260d01cb2c771a9155d8ba2a2713a6a77df2ece94af92502f5\" returns successfully" Jan 23 23:52:56.942541 containerd[1831]: time="2026-01-23T23:52:56.942044949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bfff8d8c9-qd78x,Uid:4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24,Namespace:calico-system,Attempt:1,} returns sandbox id \"8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b\"" Jan 23 23:52:57.078523 kubelet[3365]: I0123 23:52:57.077152 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2mmfw" podStartSLOduration=45.077136171 podStartE2EDuration="45.077136171s" podCreationTimestamp="2026-01-23 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:52:57.075133246 +0000 UTC m=+50.363993932" watchObservedRunningTime="2026-01-23 23:52:57.077136171 +0000 UTC m=+50.365996817" Jan 23 23:52:57.171561 containerd[1831]: time="2026-01-23T23:52:57.171430851Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:57.174717 containerd[1831]: time="2026-01-23T23:52:57.174635459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:52:57.174717 containerd[1831]: time="2026-01-23T23:52:57.174699819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:52:57.175373 kubelet[3365]: E0123 23:52:57.174941 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:52:57.175373 kubelet[3365]: E0123 23:52:57.174989 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:52:57.175373 kubelet[3365]: E0123 23:52:57.175201 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mj4bl_calico-system(e6b6e508-b275-4ee9-aa24-d58a31eb441c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:57.175765 containerd[1831]: time="2026-01-23T23:52:57.175740901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:52:57.177130 kubelet[3365]: E0123 23:52:57.177100 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:52:57.439684 containerd[1831]: time="2026-01-23T23:52:57.439567811Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:57.442986 containerd[1831]: time="2026-01-23T23:52:57.442718979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:52:57.442986 containerd[1831]: time="2026-01-23T23:52:57.442799539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:52:57.443120 kubelet[3365]: E0123 23:52:57.443035 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:52:57.443120 kubelet[3365]: E0123 23:52:57.443098 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:52:57.443898 kubelet[3365]: E0123 23:52:57.443244 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bfff8d8c9-qd78x_calico-system(4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:57.444654 kubelet[3365]: E0123 23:52:57.444612 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:52:57.816071 containerd[1831]: time="2026-01-23T23:52:57.815901245Z" level=info msg="StopPodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\"" Jan 23 23:52:57.816543 containerd[1831]: time="2026-01-23T23:52:57.816017885Z" level=info msg="StopPodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\"" Jan 23 23:52:57.885243 systemd-networkd[1410]: cali2b2b84d26f6: Gained IPv6LL Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.890 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.890 [INFO][5223] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" iface="eth0" netns="/var/run/netns/cni-da472579-4da9-bc81-92b1-e56112867d9d" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.892 [INFO][5223] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" iface="eth0" netns="/var/run/netns/cni-da472579-4da9-bc81-92b1-e56112867d9d" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.893 [INFO][5223] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" iface="eth0" netns="/var/run/netns/cni-da472579-4da9-bc81-92b1-e56112867d9d" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.893 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.893 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.923 [INFO][5236] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.923 [INFO][5236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.924 [INFO][5236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.932 [WARNING][5236] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.932 [INFO][5236] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.934 [INFO][5236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:57.938362 containerd[1831]: 2026-01-23 23:52:57.935 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:52:57.938362 containerd[1831]: time="2026-01-23T23:52:57.938198555Z" level=info msg="TearDown network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" successfully" Jan 23 23:52:57.938362 containerd[1831]: time="2026-01-23T23:52:57.938232755Z" level=info msg="StopPodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" returns successfully" Jan 23 23:52:57.944116 systemd[1]: run-netns-cni\x2dda472579\x2d4da9\x2dbc81\x2d92b1\x2de56112867d9d.mount: Deactivated successfully. Jan 23 23:52:57.947424 containerd[1831]: time="2026-01-23T23:52:57.947386219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-2csvx,Uid:e432e42b-559a-473b-8e55-fe59b8af82e5,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.901 [INFO][5227] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.902 [INFO][5227] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" iface="eth0" netns="/var/run/netns/cni-19c00364-511b-d69c-fae1-42954a7b411e" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.902 [INFO][5227] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" iface="eth0" netns="/var/run/netns/cni-19c00364-511b-d69c-fae1-42954a7b411e" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.902 [INFO][5227] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" iface="eth0" netns="/var/run/netns/cni-19c00364-511b-d69c-fae1-42954a7b411e" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.902 [INFO][5227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.902 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.935 [INFO][5241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.935 [INFO][5241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.935 [INFO][5241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.951 [WARNING][5241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.951 [INFO][5241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.953 [INFO][5241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:57.958408 containerd[1831]: 2026-01-23 23:52:57.955 [INFO][5227] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:52:57.959361 containerd[1831]: time="2026-01-23T23:52:57.958795167Z" level=info msg="TearDown network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" successfully" Jan 23 23:52:57.959361 containerd[1831]: time="2026-01-23T23:52:57.958824928Z" level=info msg="StopPodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" returns successfully" Jan 23 23:52:57.963124 systemd[1]: run-netns-cni\x2d19c00364\x2d511b\x2dd69c\x2dfae1\x2d42954a7b411e.mount: Deactivated successfully. 
Jan 23 23:52:57.963250 containerd[1831]: time="2026-01-23T23:52:57.963109938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zg499,Uid:d85237ab-62c3-4029-9724-6c41efba9b29,Namespace:calico-system,Attempt:1,}" Jan 23 23:52:58.069109 kubelet[3365]: E0123 23:52:58.067629 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:52:58.071493 kubelet[3365]: E0123 23:52:58.068779 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:52:58.169510 systemd-networkd[1410]: cali7a55bdeaf48: Link UP Jan 23 23:52:58.171319 systemd-networkd[1410]: cali7a55bdeaf48: Gained carrier Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.054 [INFO][5249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0 calico-apiserver-6c8fbd4d54- calico-apiserver e432e42b-559a-473b-8e55-fe59b8af82e5 994 0 2026-01-23 23:52:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8fbd4d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc calico-apiserver-6c8fbd4d54-2csvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a55bdeaf48 [] [] }} ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.054 [INFO][5249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.105 [INFO][5274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" HandleID="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.189523 containerd[1831]: 
2026-01-23 23:52:58.105 [INFO][5274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" HandleID="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-73953443dc", "pod":"calico-apiserver-6c8fbd4d54-2csvx", "timestamp":"2026-01-23 23:52:58.105110059 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.105 [INFO][5274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.105 [INFO][5274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.105 [INFO][5274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.123 [INFO][5274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.129 [INFO][5274] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.134 [INFO][5274] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.136 [INFO][5274] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.139 [INFO][5274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.139 [INFO][5274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.141 [INFO][5274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01 Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.150 [INFO][5274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.156 [INFO][5274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.5/26] block=192.168.0.0/26 handle="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.157 [INFO][5274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.5/26] handle="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" host="ci-4081.3.6-n-73953443dc" Jan 
23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.157 [INFO][5274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:58.189523 containerd[1831]: 2026-01-23 23:52:58.157 [INFO][5274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.5/26] IPv6=[] ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" HandleID="k8s-pod-network.a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.190712 containerd[1831]: 2026-01-23 23:52:58.162 [INFO][5249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e432e42b-559a-473b-8e55-fe59b8af82e5", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"calico-apiserver-6c8fbd4d54-2csvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a55bdeaf48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:58.190712 containerd[1831]: 2026-01-23 23:52:58.162 [INFO][5249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.5/32] ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.190712 containerd[1831]: 2026-01-23 23:52:58.162 [INFO][5249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a55bdeaf48 ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.190712 containerd[1831]: 2026-01-23 23:52:58.171 [INFO][5249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" 
WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.190712 containerd[1831]: 2026-01-23 23:52:58.171 [INFO][5249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e432e42b-559a-473b-8e55-fe59b8af82e5", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01", Pod:"calico-apiserver-6c8fbd4d54-2csvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a55bdeaf48", MAC:"fa:8f:ba:ca:51:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:58.190712 containerd[1831]: 2026-01-23 23:52:58.186 [INFO][5249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-2csvx" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:52:58.213108 containerd[1831]: time="2026-01-23T23:52:58.213031412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:58.213431 containerd[1831]: time="2026-01-23T23:52:58.213270333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:58.213431 containerd[1831]: time="2026-01-23T23:52:58.213300493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:58.213521 containerd[1831]: time="2026-01-23T23:52:58.213417853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:58.270712 systemd-networkd[1410]: calib5a6c6457f4: Link UP Jan 23 23:52:58.275368 systemd-networkd[1410]: calib5a6c6457f4: Gained carrier Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.098 [INFO][5259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0 csi-node-driver- calico-system d85237ab-62c3-4029-9724-6c41efba9b29 995 0 2026-01-23 23:52:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc csi-node-driver-zg499 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib5a6c6457f4 [] [] }} ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.098 [INFO][5259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.148 [INFO][5282] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" HandleID="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.149 [INFO][5282] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" HandleID="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-73953443dc", "pod":"csi-node-driver-zg499", "timestamp":"2026-01-23 23:52:58.14889485 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.149 [INFO][5282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.157 [INFO][5282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.157 [INFO][5282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.223 [INFO][5282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.231 [INFO][5282] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.239 [INFO][5282] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.241 [INFO][5282] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.244 [INFO][5282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.244 [INFO][5282] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.246 [INFO][5282] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.251 [INFO][5282] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.263 [INFO][5282] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.6/26] block=192.168.0.0/26 handle="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.263 [INFO][5282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.6/26] handle="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.263 [INFO][5282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:52:58.294520 containerd[1831]: 2026-01-23 23:52:58.263 [INFO][5282] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.6/26] IPv6=[] ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" HandleID="k8s-pod-network.18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.296233 containerd[1831]: 2026-01-23 23:52:58.266 [INFO][5259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d85237ab-62c3-4029-9724-6c41efba9b29", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"csi-node-driver-zg499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a6c6457f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:58.296233 containerd[1831]: 2026-01-23 23:52:58.266 [INFO][5259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.6/32] ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.296233 containerd[1831]: 2026-01-23 23:52:58.266 [INFO][5259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5a6c6457f4 ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.296233 containerd[1831]: 2026-01-23 23:52:58.268 [INFO][5259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.296233 containerd[1831]: 2026-01-23 23:52:58.269 [INFO][5259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d85237ab-62c3-4029-9724-6c41efba9b29", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a", Pod:"csi-node-driver-zg499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a6c6457f4", MAC:"be:97:4c:17:f6:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:58.296233 containerd[1831]: 2026-01-23 23:52:58.290 [INFO][5259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a" Namespace="calico-system" Pod="csi-node-driver-zg499" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:52:58.298037 containerd[1831]: time="2026-01-23T23:52:58.298007988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-2csvx,Uid:e432e42b-559a-473b-8e55-fe59b8af82e5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01\"" Jan 23 23:52:58.302908 containerd[1831]: time="2026-01-23T23:52:58.302344519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:52:58.326178 containerd[1831]: time="2026-01-23T23:52:58.325755658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:58.326178 containerd[1831]: time="2026-01-23T23:52:58.325815698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:58.326178 containerd[1831]: time="2026-01-23T23:52:58.325831018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:58.326654 containerd[1831]: time="2026-01-23T23:52:58.326479180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:58.332015 systemd-networkd[1410]: cali75629270245: Gained IPv6LL Jan 23 23:52:58.355343 systemd[1]: run-containerd-runc-k8s.io-18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a-runc.hLgrt8.mount: Deactivated successfully. Jan 23 23:52:58.376197 containerd[1831]: time="2026-01-23T23:52:58.376150226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zg499,Uid:d85237ab-62c3-4029-9724-6c41efba9b29,Namespace:calico-system,Attempt:1,} returns sandbox id \"18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a\"" Jan 23 23:52:58.523969 systemd-networkd[1410]: calia2bb7d55976: Gained IPv6LL Jan 23 23:52:58.579062 containerd[1831]: time="2026-01-23T23:52:58.578751820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:58.583072 containerd[1831]: time="2026-01-23T23:52:58.582954670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:52:58.583072 containerd[1831]: time="2026-01-23T23:52:58.583025231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:52:58.583293 kubelet[3365]: E0123 23:52:58.583187 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:52:58.583293 kubelet[3365]: E0123 23:52:58.583230 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:52:58.584322 kubelet[3365]: E0123 23:52:58.583791 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nffqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-2csvx_calico-apiserver(e432e42b-559a-473b-8e55-fe59b8af82e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:58.584423 containerd[1831]: time="2026-01-23T23:52:58.584183354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:52:58.585356 kubelet[3365]: E0123 23:52:58.585285 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:52:58.816702 containerd[1831]: time="2026-01-23T23:52:58.816413743Z" level=info msg="StopPodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\"" Jan 23 23:52:58.816702 containerd[1831]: time="2026-01-23T23:52:58.816684503Z" level=info msg="StopPodSandbox for \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\"" Jan 23 23:52:58.875687 containerd[1831]: time="2026-01-23T23:52:58.875211812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:58.879444 containerd[1831]: time="2026-01-23T23:52:58.879279702Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:52:58.879444 containerd[1831]: time="2026-01-23T23:52:58.879410942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:52:58.880850 kubelet[3365]: E0123 23:52:58.879846 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:52:58.880850 kubelet[3365]: E0123 23:52:58.880598 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:52:58.891871 kubelet[3365]: E0123 23:52:58.890419 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:58.923578 
containerd[1831]: time="2026-01-23T23:52:58.922342171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.887 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.887 [INFO][5408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" iface="eth0" netns="/var/run/netns/cni-dd25dce2-ae18-b380-e080-d0f49d9beb5d" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.889 [INFO][5408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" iface="eth0" netns="/var/run/netns/cni-dd25dce2-ae18-b380-e080-d0f49d9beb5d" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.889 [INFO][5408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" iface="eth0" netns="/var/run/netns/cni-dd25dce2-ae18-b380-e080-d0f49d9beb5d" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.889 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.889 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.943 [INFO][5425] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.943 [INFO][5425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.943 [INFO][5425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.954 [WARNING][5425] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.954 [INFO][5425] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.956 [INFO][5425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:58.966289 containerd[1831]: 2026-01-23 23:52:58.958 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:52:58.963729 systemd[1]: run-netns-cni\x2ddd25dce2\x2dae18\x2db380\x2de080\x2dd0f49d9beb5d.mount: Deactivated successfully. Jan 23 23:52:58.967625 containerd[1831]: time="2026-01-23T23:52:58.966657004Z" level=info msg="TearDown network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" successfully" Jan 23 23:52:58.967625 containerd[1831]: time="2026-01-23T23:52:58.966687684Z" level=info msg="StopPodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" returns successfully" Jan 23 23:52:58.972877 containerd[1831]: time="2026-01-23T23:52:58.972444738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-j9z4b,Uid:86223f84-2792-4d18-8124-56ab2f35f54f,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.907 [INFO][5412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.909 [INFO][5412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" iface="eth0" netns="/var/run/netns/cni-7e320b18-ae2a-32dc-1aff-7e41029f2bf1" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.909 [INFO][5412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" iface="eth0" netns="/var/run/netns/cni-7e320b18-ae2a-32dc-1aff-7e41029f2bf1" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.911 [INFO][5412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" iface="eth0" netns="/var/run/netns/cni-7e320b18-ae2a-32dc-1aff-7e41029f2bf1" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.912 [INFO][5412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.912 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.944 [INFO][5434] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.944 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.956 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.970 [WARNING][5434] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.970 [INFO][5434] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.972 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:58.977653 containerd[1831]: 2026-01-23 23:52:58.974 [INFO][5412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:52:58.978607 containerd[1831]: time="2026-01-23T23:52:58.977836632Z" level=info msg="TearDown network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" successfully" Jan 23 23:52:58.978607 containerd[1831]: time="2026-01-23T23:52:58.977872072Z" level=info msg="StopPodSandbox for \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" returns successfully" Jan 23 23:52:58.979628 containerd[1831]: time="2026-01-23T23:52:58.979429116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2ck2,Uid:d4b92f62-d0ce-4074-b14e-99f94c7e34c5,Namespace:kube-system,Attempt:1,}" Jan 23 23:52:59.079374 kubelet[3365]: E0123 23:52:59.079243 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:52:59.193075 systemd-networkd[1410]: calie4283efa2c2: Link UP Jan 23 23:52:59.193300 systemd-networkd[1410]: calie4283efa2c2: Gained carrier Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.095 [INFO][5444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0 calico-apiserver-6c8fbd4d54- calico-apiserver 86223f84-2792-4d18-8124-56ab2f35f54f 1018 0 2026-01-23 23:52:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8fbd4d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc calico-apiserver-6c8fbd4d54-j9z4b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4283efa2c2 [] [] }} ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.095 [INFO][5444] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.135 [INFO][5473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" HandleID="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.135 [INFO][5473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" HandleID="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-73953443dc", "pod":"calico-apiserver-6c8fbd4d54-j9z4b", "timestamp":"2026-01-23 23:52:59.135616272 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.136 [INFO][5473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.136 [INFO][5473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.136 [INFO][5473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.150 [INFO][5473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.156 [INFO][5473] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.160 [INFO][5473] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.162 [INFO][5473] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.164 [INFO][5473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.165 [INFO][5473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.166 [INFO][5473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.172 [INFO][5473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.183 [INFO][5473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.7/26] block=192.168.0.0/26 handle="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.183 [INFO][5473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.7/26] handle="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.183 [INFO][5473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:52:59.219010 containerd[1831]: 2026-01-23 23:52:59.183 [INFO][5473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.7/26] IPv6=[] ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" HandleID="k8s-pod-network.e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219536 containerd[1831]: 2026-01-23 23:52:59.187 [INFO][5444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"86223f84-2792-4d18-8124-56ab2f35f54f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"calico-apiserver-6c8fbd4d54-j9z4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4283efa2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:59.219536 containerd[1831]: 2026-01-23 23:52:59.187 [INFO][5444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.7/32] ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219536 containerd[1831]: 2026-01-23 23:52:59.187 [INFO][5444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4283efa2c2 ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219536 containerd[1831]: 2026-01-23 23:52:59.197 [INFO][5444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219536 containerd[1831]: 2026-01-23 23:52:59.197 [INFO][5444] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"86223f84-2792-4d18-8124-56ab2f35f54f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca", Pod:"calico-apiserver-6c8fbd4d54-j9z4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4283efa2c2", MAC:"62:14:cf:59:e7:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:59.219536 containerd[1831]: 2026-01-23 23:52:59.215 [INFO][5444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca" Namespace="calico-apiserver" Pod="calico-apiserver-6c8fbd4d54-j9z4b" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:52:59.219536 containerd[1831]: time="2026-01-23T23:52:59.218847723Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:59.222581 containerd[1831]: time="2026-01-23T23:52:59.222544773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:52:59.222872 containerd[1831]: time="2026-01-23T23:52:59.222711853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:52:59.223227 kubelet[3365]: E0123 23:52:59.223026 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:52:59.223227 kubelet[3365]: E0123 23:52:59.223074 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:52:59.229308 systemd-networkd[1410]: cali7a55bdeaf48: Gained IPv6LL Jan 23 23:52:59.230617 kubelet[3365]: E0123 23:52:59.230421 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:59.231772 kubelet[3365]: E0123 23:52:59.231633 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:52:59.244988 containerd[1831]: time="2026-01-23T23:52:59.244729429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:59.244988 containerd[1831]: time="2026-01-23T23:52:59.244808789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:59.244988 containerd[1831]: time="2026-01-23T23:52:59.244833909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:59.246419 containerd[1831]: time="2026-01-23T23:52:59.246289393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:59.305154 containerd[1831]: time="2026-01-23T23:52:59.305113222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8fbd4d54-j9z4b,Uid:86223f84-2792-4d18-8124-56ab2f35f54f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca\"" Jan 23 23:52:59.308373 systemd-networkd[1410]: cali692d15e408e: Link UP Jan 23 23:52:59.308909 systemd-networkd[1410]: cali692d15e408e: Gained carrier Jan 23 23:52:59.318925 systemd[1]: run-netns-cni\x2d7e320b18\x2dae2a\x2d32dc\x2d1aff\x2d7e41029f2bf1.mount: Deactivated successfully. Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.091 [INFO][5455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0 coredns-668d6bf9bc- kube-system d4b92f62-d0ce-4074-b14e-99f94c7e34c5 1021 0 2026-01-23 23:52:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-73953443dc coredns-668d6bf9bc-d2ck2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali692d15e408e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.091 [INFO][5455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.145 [INFO][5471] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" HandleID="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.146 [INFO][5471] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" HandleID="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-73953443dc", "pod":"coredns-668d6bf9bc-d2ck2", "timestamp":"2026-01-23 23:52:59.145851218 +0000 UTC"}, Hostname:"ci-4081.3.6-n-73953443dc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.146 [INFO][5471] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.184 [INFO][5471] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.184 [INFO][5471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-73953443dc' Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.254 [INFO][5471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.261 [INFO][5471] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.267 [INFO][5471] ipam/ipam.go 511: Trying affinity for 192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.270 [INFO][5471] ipam/ipam.go 158: Attempting to load block cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.272 [INFO][5471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.0.0/26 host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.273 [INFO][5471] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.0.0/26 handle="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.275 [INFO][5471] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420 Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.284 [INFO][5471] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.0.0/26 handle="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.296 [INFO][5471] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.0.8/26] block=192.168.0.0/26 handle="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.297 [INFO][5471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.0.8/26] handle="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" host="ci-4081.3.6-n-73953443dc" Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.297 
[INFO][5471] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:52:59.340238 containerd[1831]: 2026-01-23 23:52:59.297 [INFO][5471] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.0.8/26] IPv6=[] ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" HandleID="k8s-pod-network.44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.344062 containerd[1831]: 2026-01-23 23:52:59.300 [INFO][5455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b92f62-d0ce-4074-b14e-99f94c7e34c5", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"", Pod:"coredns-668d6bf9bc-d2ck2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali692d15e408e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:59.344062 containerd[1831]: 2026-01-23 23:52:59.300 [INFO][5455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.0.8/32] ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.344062 containerd[1831]: 2026-01-23 23:52:59.301 [INFO][5455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali692d15e408e ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.344062 containerd[1831]: 2026-01-23 23:52:59.306 [INFO][5455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.344062 containerd[1831]: 2026-01-23 23:52:59.310 [INFO][5455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b92f62-d0ce-4074-b14e-99f94c7e34c5", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420", Pod:"coredns-668d6bf9bc-d2ck2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali692d15e408e", MAC:"d6:7f:0c:32:70:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:52:59.344062 containerd[1831]: 2026-01-23 23:52:59.332 [INFO][5455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2ck2" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:52:59.359903 containerd[1831]: time="2026-01-23T23:52:59.358986199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:52:59.396193 containerd[1831]: time="2026-01-23T23:52:59.389027195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:52:59.396193 containerd[1831]: time="2026-01-23T23:52:59.389089555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:52:59.396193 containerd[1831]: time="2026-01-23T23:52:59.389100475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:59.396193 containerd[1831]: time="2026-01-23T23:52:59.389199635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:52:59.486761 containerd[1831]: time="2026-01-23T23:52:59.486651682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2ck2,Uid:d4b92f62-d0ce-4074-b14e-99f94c7e34c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420\"" Jan 23 23:52:59.500149 containerd[1831]: time="2026-01-23T23:52:59.500009596Z" level=info msg="CreateContainer within sandbox \"44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:52:59.547339 containerd[1831]: time="2026-01-23T23:52:59.547096876Z" level=info msg="CreateContainer within sandbox \"44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d257c2b842d8ed7f1bf206befd9f86759309c7826d414c2b528abab33d86888c\"" Jan 23 23:52:59.548695 containerd[1831]: time="2026-01-23T23:52:59.548666280Z" level=info msg="StartContainer for \"d257c2b842d8ed7f1bf206befd9f86759309c7826d414c2b528abab33d86888c\"" Jan 23 23:52:59.660447 containerd[1831]: time="2026-01-23T23:52:59.660398887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:52:59.663958 containerd[1831]: time="2026-01-23T23:52:59.663553216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:52:59.663958 containerd[1831]: time="2026-01-23T23:52:59.663692536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:52:59.664637 kubelet[3365]: E0123 23:52:59.664577 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:52:59.666468 kubelet[3365]: E0123 23:52:59.664645 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:52:59.666468 kubelet[3365]: E0123 23:52:59.664764 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgwjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-j9z4b_calico-apiserver(86223f84-2792-4d18-8124-56ab2f35f54f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:52:59.666468 kubelet[3365]: E0123 23:52:59.666287 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:52:59.668700 containerd[1831]: time="2026-01-23T23:52:59.668339668Z" level=info msg="StartContainer for \"d257c2b842d8ed7f1bf206befd9f86759309c7826d414c2b528abab33d86888c\" returns successfully" Jan 23 23:52:59.867984 systemd-networkd[1410]: calib5a6c6457f4: Gained IPv6LL Jan 23 23:53:00.084830 kubelet[3365]: E0123 23:53:00.084791 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:53:00.088639 kubelet[3365]: E0123 23:53:00.088601 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:53:00.089707 kubelet[3365]: E0123 23:53:00.089372 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:53:00.176949 kubelet[3365]: I0123 23:53:00.176573 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d2ck2" podStartSLOduration=48.176553941 podStartE2EDuration="48.176553941s" podCreationTimestamp="2026-01-23 23:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:53:00.151209513 +0000 UTC m=+53.440070239" watchObservedRunningTime="2026-01-23 23:53:00.176553941 +0000 UTC m=+53.465414627" Jan 23 23:53:00.444046 systemd-networkd[1410]: calie4283efa2c2: Gained IPv6LL Jan 23 23:53:00.635978 systemd-networkd[1410]: cali692d15e408e: Gained IPv6LL Jan 23 23:53:01.092107 kubelet[3365]: E0123 23:53:01.092067 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:53:06.828755 containerd[1831]: time="2026-01-23T23:53:06.827913682Z" level=info msg="StopPodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\"" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.869 [WARNING][5643] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"86223f84-2792-4d18-8124-56ab2f35f54f", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca", Pod:"calico-apiserver-6c8fbd4d54-j9z4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4283efa2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.869 [INFO][5643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.869 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" iface="eth0" netns="" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.869 [INFO][5643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.869 [INFO][5643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.890 [INFO][5650] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.891 [INFO][5650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.891 [INFO][5650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.902 [WARNING][5650] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.902 [INFO][5650] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.905 [INFO][5650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:06.908748 containerd[1831]: 2026-01-23 23:53:06.907 [INFO][5643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.909287 containerd[1831]: time="2026-01-23T23:53:06.909261739Z" level=info msg="TearDown network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" successfully" Jan 23 23:53:06.909340 containerd[1831]: time="2026-01-23T23:53:06.909328539Z" level=info msg="StopPodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" returns successfully" Jan 23 23:53:06.913125 containerd[1831]: time="2026-01-23T23:53:06.913054149Z" level=info msg="RemovePodSandbox for \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\"" Jan 23 23:53:06.913229 containerd[1831]: time="2026-01-23T23:53:06.913132189Z" level=info msg="Forcibly stopping sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\"" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.946 [WARNING][5664] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"86223f84-2792-4d18-8124-56ab2f35f54f", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"e1cc719499646739fc11cf0d3650da99c6e2f0ee3649a8d8faff84dbc5c1afca", Pod:"calico-apiserver-6c8fbd4d54-j9z4b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4283efa2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.947 [INFO][5664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.947 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" iface="eth0" netns="" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.947 [INFO][5664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.947 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.966 [INFO][5671] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.966 [INFO][5671] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.966 [INFO][5671] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.975 [WARNING][5671] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.975 [INFO][5671] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" HandleID="k8s-pod-network.d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--j9z4b-eth0" Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.977 [INFO][5671] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:06.980827 containerd[1831]: 2026-01-23 23:53:06.978 [INFO][5664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27" Jan 23 23:53:06.980827 containerd[1831]: time="2026-01-23T23:53:06.980729089Z" level=info msg="TearDown network for sandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" successfully" Jan 23 23:53:06.996234 containerd[1831]: time="2026-01-23T23:53:06.996067210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:06.996234 containerd[1831]: time="2026-01-23T23:53:06.996134050Z" level=info msg="RemovePodSandbox \"d9240c9267a873cfd4f178e8c3dc545aabb1f0088745fb913d5a2f1cbd2d4e27\" returns successfully" Jan 23 23:53:06.997367 containerd[1831]: time="2026-01-23T23:53:06.997095813Z" level=info msg="StopPodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\"" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.027 [WARNING][5685] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6b6e508-b275-4ee9-aa24-d58a31eb441c", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e", Pod:"goldmane-666569f655-mj4bl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2bb7d55976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.027 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.027 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" iface="eth0" netns="" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.027 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.027 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.045 [INFO][5692] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.046 [INFO][5692] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.046 [INFO][5692] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.054 [WARNING][5692] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.054 [INFO][5692] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.056 [INFO][5692] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.060841 containerd[1831]: 2026-01-23 23:53:07.058 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.062531 containerd[1831]: time="2026-01-23T23:53:07.062195506Z" level=info msg="TearDown network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" successfully" Jan 23 23:53:07.062531 containerd[1831]: time="2026-01-23T23:53:07.062228986Z" level=info msg="StopPodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" returns successfully" Jan 23 23:53:07.063176 containerd[1831]: time="2026-01-23T23:53:07.062720347Z" level=info msg="RemovePodSandbox for \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\"" Jan 23 23:53:07.063176 containerd[1831]: time="2026-01-23T23:53:07.062746467Z" level=info msg="Forcibly stopping sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\"" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.113 [WARNING][5706] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6b6e508-b275-4ee9-aa24-d58a31eb441c", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"78213424a2dd63442bb92be2da68ce47680eef4e0d1ef95775156a8ebed8831e", Pod:"goldmane-666569f655-mj4bl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.0.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2bb7d55976", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.113 [INFO][5706] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.113 [INFO][5706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" iface="eth0" netns="" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.113 [INFO][5706] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.113 [INFO][5706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.142 [INFO][5713] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.142 [INFO][5713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.142 [INFO][5713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.151 [WARNING][5713] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.151 [INFO][5713] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" HandleID="k8s-pod-network.04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Workload="ci--4081.3.6--n--73953443dc-k8s-goldmane--666569f655--mj4bl-eth0" Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.153 [INFO][5713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.157640 containerd[1831]: 2026-01-23 23:53:07.154 [INFO][5706] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004" Jan 23 23:53:07.159400 containerd[1831]: time="2026-01-23T23:53:07.158105761Z" level=info msg="TearDown network for sandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" successfully" Jan 23 23:53:07.176895 containerd[1831]: time="2026-01-23T23:53:07.176834451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:07.177108 containerd[1831]: time="2026-01-23T23:53:07.177092172Z" level=info msg="RemovePodSandbox \"04a111cf76b390347c3e5a3798df76be3c5ebffd326ca4da42978423d5658004\" returns successfully" Jan 23 23:53:07.177670 containerd[1831]: time="2026-01-23T23:53:07.177645573Z" level=info msg="StopPodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\"" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.210 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0", GenerateName:"calico-kube-controllers-6bfff8d8c9-", Namespace:"calico-system", SelfLink:"", UID:"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bfff8d8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b", Pod:"calico-kube-controllers-6bfff8d8c9-qd78x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b2b84d26f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.210 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.210 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" iface="eth0" netns="" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.210 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.210 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.232 [INFO][5734] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.232 [INFO][5734] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.232 [INFO][5734] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.241 [WARNING][5734] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.241 [INFO][5734] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.242 [INFO][5734] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.246299 containerd[1831]: 2026-01-23 23:53:07.244 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.246945 containerd[1831]: time="2026-01-23T23:53:07.246364116Z" level=info msg="TearDown network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" successfully" Jan 23 23:53:07.246945 containerd[1831]: time="2026-01-23T23:53:07.246387716Z" level=info msg="StopPodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" returns successfully" Jan 23 23:53:07.246945 containerd[1831]: time="2026-01-23T23:53:07.246824117Z" level=info msg="RemovePodSandbox for \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\"" Jan 23 23:53:07.246945 containerd[1831]: time="2026-01-23T23:53:07.246885317Z" level=info msg="Forcibly stopping sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\"" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.278 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0", GenerateName:"calico-kube-controllers-6bfff8d8c9-", Namespace:"calico-system", SelfLink:"", UID:"4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bfff8d8c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"8928b0a677d41afc0175446d40ea26e63166f939d7a26ca2e7cb19d8558d286b", Pod:"calico-kube-controllers-6bfff8d8c9-qd78x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.0.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b2b84d26f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.278 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.278 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" iface="eth0" netns="" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.278 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.278 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.297 [INFO][5755] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.297 [INFO][5755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.297 [INFO][5755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.306 [WARNING][5755] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.306 [INFO][5755] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" HandleID="k8s-pod-network.7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--kube--controllers--6bfff8d8c9--qd78x-eth0" Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.307 [INFO][5755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.311121 containerd[1831]: 2026-01-23 23:53:07.309 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411" Jan 23 23:53:07.311523 containerd[1831]: time="2026-01-23T23:53:07.311151928Z" level=info msg="TearDown network for sandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" successfully" Jan 23 23:53:07.318245 containerd[1831]: time="2026-01-23T23:53:07.318193747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:07.318472 containerd[1831]: time="2026-01-23T23:53:07.318255907Z" level=info msg="RemovePodSandbox \"7bec99e85baba752ca27a4ad8a89de1c5375cf1bba6153e13a4d6674ba143411\" returns successfully" Jan 23 23:53:07.321066 containerd[1831]: time="2026-01-23T23:53:07.320781514Z" level=info msg="StopPodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\"" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.353 [WARNING][5769] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d85237ab-62c3-4029-9724-6c41efba9b29", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a", Pod:"csi-node-driver-zg499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a6c6457f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.353 [INFO][5769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.353 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" iface="eth0" netns="" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.353 [INFO][5769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.353 [INFO][5769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.371 [INFO][5776] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.371 [INFO][5776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.372 [INFO][5776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.381 [WARNING][5776] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.381 [INFO][5776] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.382 [INFO][5776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.386671 containerd[1831]: 2026-01-23 23:53:07.384 [INFO][5769] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.387375 containerd[1831]: time="2026-01-23T23:53:07.386713890Z" level=info msg="TearDown network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" successfully" Jan 23 23:53:07.387375 containerd[1831]: time="2026-01-23T23:53:07.386738850Z" level=info msg="StopPodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" returns successfully" Jan 23 23:53:07.387944 containerd[1831]: time="2026-01-23T23:53:07.387608132Z" level=info msg="RemovePodSandbox for \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\"" Jan 23 23:53:07.387944 containerd[1831]: time="2026-01-23T23:53:07.387636772Z" level=info msg="Forcibly stopping sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\"" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.421 [WARNING][5790] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d85237ab-62c3-4029-9724-6c41efba9b29", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"18140e7f38ee3cbfb327ed76999d3a48a967bc38c089b7568df61f436f594a2a", Pod:"csi-node-driver-zg499", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.0.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib5a6c6457f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.421 [INFO][5790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.421 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" iface="eth0" netns="" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.421 [INFO][5790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.421 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.438 [INFO][5797] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.438 [INFO][5797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.438 [INFO][5797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.447 [WARNING][5797] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.447 [INFO][5797] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" HandleID="k8s-pod-network.7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Workload="ci--4081.3.6--n--73953443dc-k8s-csi--node--driver--zg499-eth0" Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.449 [INFO][5797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.452667 containerd[1831]: 2026-01-23 23:53:07.450 [INFO][5790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f" Jan 23 23:53:07.453064 containerd[1831]: time="2026-01-23T23:53:07.452656785Z" level=info msg="TearDown network for sandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" successfully" Jan 23 23:53:07.461620 containerd[1831]: time="2026-01-23T23:53:07.461580809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:07.461715 containerd[1831]: time="2026-01-23T23:53:07.461640209Z" level=info msg="RemovePodSandbox \"7bb799e4b363beec6246ab5f548dde8bff8af14efbad27454ddde67f39858c0f\" returns successfully" Jan 23 23:53:07.462122 containerd[1831]: time="2026-01-23T23:53:07.462078010Z" level=info msg="StopPodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\"" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.494 [WARNING][5811] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.494 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.494 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" iface="eth0" netns="" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.494 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.494 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.511 [INFO][5818] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.511 [INFO][5818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.511 [INFO][5818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.520 [WARNING][5818] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.520 [INFO][5818] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.521 [INFO][5818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.525503 containerd[1831]: 2026-01-23 23:53:07.523 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.525503 containerd[1831]: time="2026-01-23T23:53:07.525407379Z" level=info msg="TearDown network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" successfully" Jan 23 23:53:07.525503 containerd[1831]: time="2026-01-23T23:53:07.525431259Z" level=info msg="StopPodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" returns successfully" Jan 23 23:53:07.526280 containerd[1831]: time="2026-01-23T23:53:07.526252741Z" level=info msg="RemovePodSandbox for \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\"" Jan 23 23:53:07.526336 containerd[1831]: time="2026-01-23T23:53:07.526287741Z" level=info msg="Forcibly stopping sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\"" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.559 [WARNING][5832] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" WorkloadEndpoint="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.559 [INFO][5832] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.559 [INFO][5832] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" iface="eth0" netns="" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.559 [INFO][5832] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.559 [INFO][5832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.577 [INFO][5839] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.578 [INFO][5839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.578 [INFO][5839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.587 [WARNING][5839] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.587 [INFO][5839] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" HandleID="k8s-pod-network.c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Workload="ci--4081.3.6--n--73953443dc-k8s-whisker--7c5ccfd59c--h6qxv-eth0" Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.588 [INFO][5839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.592905 containerd[1831]: 2026-01-23 23:53:07.590 [INFO][5832] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb" Jan 23 23:53:07.592905 containerd[1831]: time="2026-01-23T23:53:07.592019436Z" level=info msg="TearDown network for sandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" successfully" Jan 23 23:53:07.611525 containerd[1831]: time="2026-01-23T23:53:07.611479048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:07.611750 containerd[1831]: time="2026-01-23T23:53:07.611733888Z" level=info msg="RemovePodSandbox \"c28bed30dcdc03571c8827a87bafe82efd3e689907100170fa06587f4c1735bb\" returns successfully" Jan 23 23:53:07.612262 containerd[1831]: time="2026-01-23T23:53:07.612238850Z" level=info msg="StopPodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\"" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.646 [WARNING][5853] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e432e42b-559a-473b-8e55-fe59b8af82e5", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01", Pod:"calico-apiserver-6c8fbd4d54-2csvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a55bdeaf48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.646 [INFO][5853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.646 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" iface="eth0" netns="" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.646 [INFO][5853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.646 [INFO][5853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.665 [INFO][5861] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.666 [INFO][5861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.666 [INFO][5861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.674 [WARNING][5861] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.674 [INFO][5861] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.676 [INFO][5861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.680106 containerd[1831]: 2026-01-23 23:53:07.677 [INFO][5853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.680504 containerd[1831]: time="2026-01-23T23:53:07.680155271Z" level=info msg="TearDown network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" successfully" Jan 23 23:53:07.680504 containerd[1831]: time="2026-01-23T23:53:07.680181311Z" level=info msg="StopPodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" returns successfully" Jan 23 23:53:07.680789 containerd[1831]: time="2026-01-23T23:53:07.680768552Z" level=info msg="RemovePodSandbox for \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\"" Jan 23 23:53:07.680827 containerd[1831]: time="2026-01-23T23:53:07.680798632Z" level=info msg="Forcibly stopping sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\"" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.712 [WARNING][5875] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0", GenerateName:"calico-apiserver-6c8fbd4d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"e432e42b-559a-473b-8e55-fe59b8af82e5", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8fbd4d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"a6c28c66e867485686527d51501d23e8bdde963e5ea309e2792e6d9d5b5bcf01", Pod:"calico-apiserver-6c8fbd4d54-2csvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.0.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a55bdeaf48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.713 [INFO][5875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.713 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" iface="eth0" netns="" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.713 [INFO][5875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.713 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.730 [INFO][5882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.730 [INFO][5882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.730 [INFO][5882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.739 [WARNING][5882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.739 [INFO][5882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" HandleID="k8s-pod-network.903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Workload="ci--4081.3.6--n--73953443dc-k8s-calico--apiserver--6c8fbd4d54--2csvx-eth0" Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.740 [INFO][5882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.743960 containerd[1831]: 2026-01-23 23:53:07.742 [INFO][5875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18" Jan 23 23:53:07.744325 containerd[1831]: time="2026-01-23T23:53:07.743950280Z" level=info msg="TearDown network for sandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" successfully" Jan 23 23:53:07.755242 containerd[1831]: time="2026-01-23T23:53:07.755201670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:07.755357 containerd[1831]: time="2026-01-23T23:53:07.755267990Z" level=info msg="RemovePodSandbox \"903de875e640656c976f238bc5962eaf674b4bfd63ff968d32da9c6beaf48b18\" returns successfully" Jan 23 23:53:07.755779 containerd[1831]: time="2026-01-23T23:53:07.755652431Z" level=info msg="StopPodSandbox for \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\"" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.790 [WARNING][5898] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b92f62-d0ce-4074-b14e-99f94c7e34c5", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420", Pod:"coredns-668d6bf9bc-d2ck2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali692d15e408e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.790 [INFO][5898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.790 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" iface="eth0" netns="" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.790 [INFO][5898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.790 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.809 [INFO][5905] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.809 [INFO][5905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.809 [INFO][5905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.823 [WARNING][5905] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.823 [INFO][5905] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.826 [INFO][5905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.831479 containerd[1831]: 2026-01-23 23:53:07.829 [INFO][5898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.832147 containerd[1831]: time="2026-01-23T23:53:07.831529073Z" level=info msg="TearDown network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" successfully" Jan 23 23:53:07.832147 containerd[1831]: time="2026-01-23T23:53:07.831554233Z" level=info msg="StopPodSandbox for \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" returns successfully" Jan 23 23:53:07.832147 containerd[1831]: time="2026-01-23T23:53:07.832007075Z" level=info msg="RemovePodSandbox for \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\"" Jan 23 23:53:07.832147 containerd[1831]: time="2026-01-23T23:53:07.832038795Z" level=info msg="Forcibly stopping sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\"" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.863 [WARNING][5919] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4b92f62-d0ce-4074-b14e-99f94c7e34c5", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"44893dd2f671c4847a062b0166efe0498efbf4df916e0bb6fc223f4b40c51420", Pod:"coredns-668d6bf9bc-d2ck2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali692d15e408e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.863 [INFO][5919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.863 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" iface="eth0" netns="" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.863 [INFO][5919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.863 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.884 [INFO][5926] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.884 [INFO][5926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.884 [INFO][5926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.892 [WARNING][5926] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.892 [INFO][5926] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" HandleID="k8s-pod-network.939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--d2ck2-eth0" Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.894 [INFO][5926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.897890 containerd[1831]: 2026-01-23 23:53:07.896 [INFO][5919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe" Jan 23 23:53:07.897890 containerd[1831]: time="2026-01-23T23:53:07.897637169Z" level=info msg="TearDown network for sandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" successfully" Jan 23 23:53:07.905366 containerd[1831]: time="2026-01-23T23:53:07.905322750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:07.905512 containerd[1831]: time="2026-01-23T23:53:07.905386630Z" level=info msg="RemovePodSandbox \"939c290af44b214c51df9626c1e058dd5a2371a8bc13a682a143d5161065e3fe\" returns successfully" Jan 23 23:53:07.905883 containerd[1831]: time="2026-01-23T23:53:07.905848391Z" level=info msg="StopPodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\"" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.938 [WARNING][5940] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ab0fffca-fdb6-48fb-890d-1befd8d9f70b", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64", Pod:"coredns-668d6bf9bc-2mmfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75629270245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.939 [INFO][5940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.939 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" iface="eth0" netns="" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.939 [INFO][5940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.939 [INFO][5940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.957 [INFO][5947] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.957 [INFO][5947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.958 [INFO][5947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.966 [WARNING][5947] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.966 [INFO][5947] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.968 [INFO][5947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:07.972090 containerd[1831]: 2026-01-23 23:53:07.970 [INFO][5940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:07.972090 containerd[1831]: time="2026-01-23T23:53:07.971972527Z" level=info msg="TearDown network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" successfully" Jan 23 23:53:07.972090 containerd[1831]: time="2026-01-23T23:53:07.971997367Z" level=info msg="StopPodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" returns successfully" Jan 23 23:53:07.973216 containerd[1831]: time="2026-01-23T23:53:07.972951170Z" level=info msg="RemovePodSandbox for \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\"" Jan 23 23:53:07.973216 containerd[1831]: time="2026-01-23T23:53:07.972980770Z" level=info msg="Forcibly stopping sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\"" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.008 [WARNING][5961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ab0fffca-fdb6-48fb-890d-1befd8d9f70b", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-73953443dc", ContainerID:"66b8b975c610f5f1256269c76e690812383fa58e967b4dccff76400692349a64", Pod:"coredns-668d6bf9bc-2mmfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.0.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75629270245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.008 [INFO][5961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.008 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" iface="eth0" netns="" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.008 [INFO][5961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.008 [INFO][5961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.029 [INFO][5968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.030 [INFO][5968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.030 [INFO][5968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.039 [WARNING][5968] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.039 [INFO][5968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" HandleID="k8s-pod-network.0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Workload="ci--4081.3.6--n--73953443dc-k8s-coredns--668d6bf9bc--2mmfw-eth0" Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.040 [INFO][5968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:53:08.044404 containerd[1831]: 2026-01-23 23:53:08.042 [INFO][5961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11" Jan 23 23:53:08.046131 containerd[1831]: time="2026-01-23T23:53:08.045050962Z" level=info msg="TearDown network for sandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" successfully" Jan 23 23:53:08.053714 containerd[1831]: time="2026-01-23T23:53:08.053676105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:53:08.053954 containerd[1831]: time="2026-01-23T23:53:08.053932945Z" level=info msg="RemovePodSandbox \"0fdc7d24673e7bec72a62573c83ba577ae42bd72fe43ec0623fc23b8b097ef11\" returns successfully" Jan 23 23:53:09.826120 containerd[1831]: time="2026-01-23T23:53:09.826043861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:53:10.151411 containerd[1831]: time="2026-01-23T23:53:10.151224646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:10.154277 containerd[1831]: time="2026-01-23T23:53:10.154227734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:53:10.154377 containerd[1831]: time="2026-01-23T23:53:10.154334855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:53:10.154697 kubelet[3365]: E0123 23:53:10.154465 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:53:10.154697 kubelet[3365]: E0123 23:53:10.154521 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:53:10.156548 containerd[1831]: time="2026-01-23T23:53:10.155292657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:53:10.156682 kubelet[3365]: E0123 23:53:10.156528 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc17560a7d374cc8a5379ccc150c7fc7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:10.403368 containerd[1831]: time="2026-01-23T23:53:10.403132477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:10.406219 containerd[1831]: time="2026-01-23T23:53:10.406115005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:53:10.406219 containerd[1831]: time="2026-01-23T23:53:10.406180805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:53:10.406364 kubelet[3365]: E0123 23:53:10.406314 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:53:10.406402 kubelet[3365]: E0123 23:53:10.406360 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:53:10.407503 kubelet[3365]: E0123 23:53:10.406583 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mj4bl_calico-system(e6b6e508-b275-4ee9-aa24-d58a31eb441c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:10.407670 containerd[1831]: time="2026-01-23T23:53:10.407084727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:53:10.408519 kubelet[3365]: E0123 23:53:10.407886 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:53:10.649904 containerd[1831]: time="2026-01-23T23:53:10.649747613Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:10.652800 containerd[1831]: time="2026-01-23T23:53:10.652767861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:53:10.652972 containerd[1831]: time="2026-01-23T23:53:10.652840981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:53:10.653011 kubelet[3365]: E0123 23:53:10.652954 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:53:10.653011 kubelet[3365]: E0123 23:53:10.653000 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:53:10.653159 kubelet[3365]: E0123 23:53:10.653113 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:10.654518 kubelet[3365]: E0123 23:53:10.654204 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:53:12.818594 containerd[1831]: time="2026-01-23T23:53:12.818523664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:53:13.063881 containerd[1831]: time="2026-01-23T23:53:13.063742717Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:13.067182 containerd[1831]: time="2026-01-23T23:53:13.067139446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:53:13.067264 containerd[1831]: time="2026-01-23T23:53:13.067237166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:53:13.067433 kubelet[3365]: E0123 23:53:13.067394 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:13.067757 kubelet[3365]: E0123 23:53:13.067447 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:13.067757 kubelet[3365]: E0123 23:53:13.067655 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nffqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-2csvx_calico-apiserver(e432e42b-559a-473b-8e55-fe59b8af82e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:13.068968 kubelet[3365]: E0123 23:53:13.068812 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:53:13.069067 containerd[1831]: time="2026-01-23T23:53:13.068821850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:53:13.329648 containerd[1831]: time="2026-01-23T23:53:13.329525624Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:13.332643 containerd[1831]: time="2026-01-23T23:53:13.332593112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:53:13.332737 containerd[1831]: time="2026-01-23T23:53:13.332691033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:53:13.332904 kubelet[3365]: E0123 23:53:13.332852 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:53:13.333021 kubelet[3365]: E0123 23:53:13.332916 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:53:13.333946 kubelet[3365]: E0123 23:53:13.333043 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bfff8d8c9-qd78x_calico-system(4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:13.334387 kubelet[3365]: E0123 23:53:13.334333 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:53:15.817090 containerd[1831]: time="2026-01-23T23:53:15.817046156Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:53:16.053304 containerd[1831]: time="2026-01-23T23:53:16.053123933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:16.056644 containerd[1831]: time="2026-01-23T23:53:16.056535502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:53:16.056770 containerd[1831]: time="2026-01-23T23:53:16.056632942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:53:16.056798 kubelet[3365]: E0123 23:53:16.056763 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:53:16.057106 kubelet[3365]: E0123 23:53:16.056810 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:53:16.057106 kubelet[3365]: E0123 23:53:16.057030 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:16.057661 containerd[1831]: time="2026-01-23T23:53:16.057426784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:53:16.358646 containerd[1831]: time="2026-01-23T23:53:16.358598691Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:16.361731 containerd[1831]: time="2026-01-23T23:53:16.361693059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:53:16.361806 containerd[1831]: time="2026-01-23T23:53:16.361785299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:53:16.362142 kubelet[3365]: E0123 23:53:16.361933 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:16.362142 kubelet[3365]: E0123 23:53:16.361988 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:16.362264 kubelet[3365]: E0123 23:53:16.362197 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgwjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-j9z4b_calico-apiserver(86223f84-2792-4d18-8124-56ab2f35f54f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:16.363368 containerd[1831]: time="2026-01-23T23:53:16.363328743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:53:16.363654 kubelet[3365]: E0123 23:53:16.363617 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:53:16.630230 containerd[1831]: time="2026-01-23T23:53:16.630074080Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:16.633356 containerd[1831]: time="2026-01-23T23:53:16.633310609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:53:16.633494 containerd[1831]: time="2026-01-23T23:53:16.633335289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:53:16.633534 kubelet[3365]: E0123 23:53:16.633500 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:53:16.633579 kubelet[3365]: E0123 23:53:16.633543 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:53:16.633693 kubelet[3365]: E0123 23:53:16.633645 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:16.634975 kubelet[3365]: E0123 23:53:16.634911 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:53:21.816961 kubelet[3365]: E0123 23:53:21.816900 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:53:23.817473 kubelet[3365]: E0123 23:53:23.816819 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:53:25.821159 kubelet[3365]: E0123 23:53:25.821038 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:53:26.829176 kubelet[3365]: E0123 23:53:26.825420 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:53:29.818268 kubelet[3365]: E0123 23:53:29.818205 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:53:31.818932 kubelet[3365]: E0123 23:53:31.817593 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:53:32.821490 containerd[1831]: time="2026-01-23T23:53:32.820582798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:53:33.067695 containerd[1831]: time="2026-01-23T23:53:33.067650565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:33.071644 containerd[1831]: time="2026-01-23T23:53:33.071451174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:53:33.072161 containerd[1831]: time="2026-01-23T23:53:33.071556054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:53:33.072338 kubelet[3365]: E0123 23:53:33.072296 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:53:33.073746 kubelet[3365]: E0123 23:53:33.072347 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:53:33.073746 kubelet[3365]: E0123 23:53:33.072511 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mj4bl_calico-system(e6b6e508-b275-4ee9-aa24-d58a31eb441c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:33.074331 kubelet[3365]: E0123 23:53:33.073937 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:53:37.816986 containerd[1831]: 
time="2026-01-23T23:53:37.816907225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:53:38.083954 containerd[1831]: time="2026-01-23T23:53:38.083580440Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:38.087434 containerd[1831]: time="2026-01-23T23:53:38.087294849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:53:38.087434 containerd[1831]: time="2026-01-23T23:53:38.087400849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:53:38.087764 kubelet[3365]: E0123 23:53:38.087717 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:53:38.088084 kubelet[3365]: E0123 23:53:38.087944 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:53:38.088115 kubelet[3365]: E0123 23:53:38.088076 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bfff8d8c9-qd78x_calico-system(4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:38.089888 kubelet[3365]: E0123 23:53:38.089253 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:53:38.817453 containerd[1831]: time="2026-01-23T23:53:38.817419802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:53:39.061520 containerd[1831]: time="2026-01-23T23:53:39.061454401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:39.066069 containerd[1831]: time="2026-01-23T23:53:39.066007492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:53:39.066205 containerd[1831]: time="2026-01-23T23:53:39.066129812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:53:39.066832 kubelet[3365]: E0123 23:53:39.066342 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:53:39.066832 kubelet[3365]: E0123 23:53:39.066398 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:53:39.066832 kubelet[3365]: E0123 
23:53:39.066490 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc17560a7d374cc8a5379ccc150c7fc7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:39.069878 containerd[1831]: time="2026-01-23T23:53:39.069779621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:53:39.372239 containerd[1831]: time="2026-01-23T23:53:39.371930563Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:39.376368 containerd[1831]: time="2026-01-23T23:53:39.376229574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:53:39.376368 containerd[1831]: time="2026-01-23T23:53:39.376339254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:53:39.377177 kubelet[3365]: E0123 23:53:39.376659 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:53:39.377177 kubelet[3365]: E0123 23:53:39.376719 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:53:39.377177 kubelet[3365]: E0123 23:53:39.376821 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:39.379897 kubelet[3365]: E0123 23:53:39.378720 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:53:39.819886 containerd[1831]: time="2026-01-23T23:53:39.818051354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:53:40.087244 containerd[1831]: 
time="2026-01-23T23:53:40.086904647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:40.090453 containerd[1831]: time="2026-01-23T23:53:40.090408216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:53:40.090565 containerd[1831]: time="2026-01-23T23:53:40.090539176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:53:40.090908 kubelet[3365]: E0123 23:53:40.090674 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:40.090908 kubelet[3365]: E0123 23:53:40.090725 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:40.090908 kubelet[3365]: E0123 23:53:40.090847 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nffqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-2csvx_calico-apiserver(e432e42b-559a-473b-8e55-fe59b8af82e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:40.092920 kubelet[3365]: E0123 23:53:40.092884 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:53:44.824878 containerd[1831]: time="2026-01-23T23:53:44.823307121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:53:45.082000 containerd[1831]: time="2026-01-23T23:53:45.081735867Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:45.085398 containerd[1831]: time="2026-01-23T23:53:45.085355956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:53:45.085474 containerd[1831]: time="2026-01-23T23:53:45.085448796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:53:45.085635 kubelet[3365]: E0123 23:53:45.085599 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:53:45.086000 kubelet[3365]: E0123 23:53:45.085648 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:53:45.086000 kubelet[3365]: E0123 23:53:45.085757 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:45.088698 containerd[1831]: time="2026-01-23T23:53:45.088671044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:53:45.317815 containerd[1831]: time="2026-01-23T23:53:45.317445633Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:45.321782 containerd[1831]: time="2026-01-23T23:53:45.321405124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:53:45.321782 containerd[1831]: time="2026-01-23T23:53:45.321515964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:53:45.323045 kubelet[3365]: E0123 23:53:45.322990 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:53:45.323141 kubelet[3365]: E0123 23:53:45.323057 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:53:45.323242 kubelet[3365]: E0123 23:53:45.323197 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:45.324786 kubelet[3365]: E0123 23:53:45.324731 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:53:46.817166 containerd[1831]: time="2026-01-23T23:53:46.816688613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:53:47.102120 containerd[1831]: time="2026-01-23T23:53:47.101880148Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:53:47.106936 containerd[1831]: time="2026-01-23T23:53:47.106779760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:53:47.106936 containerd[1831]: time="2026-01-23T23:53:47.106903841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:53:47.109475 kubelet[3365]: E0123 23:53:47.109016 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:47.109475 kubelet[3365]: E0123 23:53:47.109063 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:53:47.110033 kubelet[3365]: E0123 23:53:47.109949 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgwjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-j9z4b_calico-apiserver(86223f84-2792-4d18-8124-56ab2f35f54f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:53:47.111138 kubelet[3365]: E0123 23:53:47.111085 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:53:47.819733 kubelet[3365]: E0123 23:53:47.819354 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:53:50.823444 kubelet[3365]: E0123 23:53:50.820136 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:53:50.831754 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:45260.service - OpenSSH per-connection server daemon (10.200.16.10:45260). 
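Every pull in the excerpt above fails at the same step: containerd tries to resolve the `v3.30.4` tag on ghcr.io, receives HTTP 404 ("trying next host - response was http.StatusNotFound"), and surfaces a NotFound error that the kubelet then propagates through `log.go`, `kuberuntime_image.go`, and `kuberuntime_manager.go`. That resolve step can be reproduced outside containerd with a plain OCI distribution-API request; the sketch below assumes ghcr.io's standard anonymous token flow, which is not itself shown in the log.

```go
// resolvecheck: a minimal sketch of the resolve step containerd performs
// before pulling. It fetches an anonymous pull token (assumed ghcr.io token
// flow) and HEADs the manifest; a 404 on the HEAD is the same "not found"
// that every PullImage above reports for the v3.30.4 tags.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func tagExists(name, tag string) (bool, error) {
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + name + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest, as a resolver would, instead of pulling any layers.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", name, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	// Repository names taken from the failing pulls in the log.
	for _, name := range []string{"flatcar/calico/goldmane", "flatcar/calico/kube-controllers"} {
		ok, err := tagExists(name, "v3.30.4")
		fmt.Printf("%s:v3.30.4 resolvable=%v err=%v\n", name, ok, err)
	}
}
```

A `false` result for each repository matches the log: the failure is a missing tag in the registry, not an authentication or network problem.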
Jan 23 23:53:51.302742 sshd[6043]: Accepted publickey for core from 10.200.16.10 port 45260 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:51.307162 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:51.315498 systemd-logind[1805]: New session 10 of user core. Jan 23 23:53:51.320119 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:53:51.712953 sshd[6043]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:51.720172 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:45260.service: Deactivated successfully. Jan 23 23:53:51.724766 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:53:51.727708 systemd-logind[1805]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:53:51.728847 systemd-logind[1805]: Removed session 10. Jan 23 23:53:51.816571 kubelet[3365]: E0123 23:53:51.816269 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:53:53.819878 kubelet[3365]: E0123 23:53:53.819049 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:53:56.803183 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:45264.service - OpenSSH per-connection server daemon (10.200.16.10:45264). Jan 23 23:53:57.293790 sshd[6081]: Accepted publickey for core from 10.200.16.10 port 45264 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:53:57.295752 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:53:57.303035 systemd-logind[1805]: New session 11 of user core. Jan 23 23:53:57.307595 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:53:57.707336 sshd[6081]: pam_unix(sshd:session): session closed for user core Jan 23 23:53:57.758574 systemd-logind[1805]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:53:57.759213 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:45264.service: Deactivated successfully. Jan 23 23:53:57.762586 systemd[1]: session-11.scope: Deactivated successfully. 
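Once a pull has failed with ErrImagePull, the kubelet entries switch to ImagePullBackOff ("Back-off pulling image ...") and the retry timestamps spread out. The implied schedule is exponential doubling with a cap; the 10-second base and 5-minute ceiling in this sketch are the commonly documented kubelet defaults, assumed here rather than read from this log.

```go
// backoff: a sketch of the capped exponential image-pull backoff implied by
// the ImagePullBackOff entries above. The 10s base and 5m cap are assumed
// kubelet defaults, not values taken from the log.
package main

import (
	"fmt"
	"time"
)

// pullDelays returns the successive retry delays after each failed pull:
// doubling from base, capped at maxDelay.
func pullDelays(failures int) []time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, failures)
	d := base
	for i := 0; i < failures; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
	fmt.Println(pullDelays(7))
}
```

After a handful of failures each image sits at the cap, which is consistent with how sparse the later PullImage attempts in this log become.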
Jan 23 23:53:57.765125 systemd-logind[1805]: Removed session 11. Jan 23 23:53:58.821102 kubelet[3365]: E0123 23:53:58.821041 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:54:00.819137 kubelet[3365]: E0123 23:54:00.819028 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:54:01.817884 kubelet[3365]: E0123 23:54:01.816509 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:54:02.793228 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:43194.service - OpenSSH per-connection server daemon (10.200.16.10:43194). Jan 23 23:54:02.817561 kubelet[3365]: E0123 23:54:02.817523 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:54:03.294275 sshd[6096]: Accepted publickey for core from 10.200.16.10 port 43194 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:03.297537 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:03.308879 systemd-logind[1805]: New session 12 of user core. 
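Each pod_workers line names the affected pod and UID, but the same information can be read from the API server directly. The sketch below (assuming kubeconfig access to this cluster; it is not tooling present on the logged host) lists containers waiting on the two reasons seen above, ErrImagePull and ImagePullBackOff.

```go
// stuckpods: list containers stuck on image pulls, using client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "" = all namespaces; the log shows failures in both calico-system and
	// calico-apiserver.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s container=%s reason=%s image=%s\n",
					p.Namespace, p.Name, st.Name, w.Reason, st.Image)
			}
		}
	}
}
```

Run against the cluster in this state, it would report the goldmane, calico-kube-controllers, whisker, whisker-backend, calico-apiserver, calico-csi, and csi-node-driver-registrar containers named in the entries above.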
Jan 23 23:54:03.313152 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:54:03.814290 sshd[6096]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:03.823521 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:43194.service: Deactivated successfully. Jan 23 23:54:03.830170 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:54:03.831830 systemd-logind[1805]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:54:03.835469 systemd-logind[1805]: Removed session 12. Jan 23 23:54:03.918333 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:43196.service - OpenSSH per-connection server daemon (10.200.16.10:43196). Jan 23 23:54:04.417480 sshd[6120]: Accepted publickey for core from 10.200.16.10 port 43196 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:04.422343 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:04.430614 systemd-logind[1805]: New session 13 of user core. Jan 23 23:54:04.437482 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:54:04.821934 kubelet[3365]: E0123 23:54:04.818016 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:54:04.962690 sshd[6120]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:04.969088 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:43196.service: Deactivated successfully. Jan 23 23:54:04.972914 systemd-logind[1805]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:54:04.976136 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:54:04.979035 systemd-logind[1805]: Removed session 13. Jan 23 23:54:05.043165 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:43202.service - OpenSSH per-connection server daemon (10.200.16.10:43202). Jan 23 23:54:05.490611 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 43202 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:05.492055 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:05.495946 systemd-logind[1805]: New session 14 of user core. Jan 23 23:54:05.501206 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:54:05.911665 sshd[6132]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:05.918186 systemd-logind[1805]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:54:05.918329 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:43202.service: Deactivated successfully. Jan 23 23:54:05.921172 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:54:05.923335 systemd-logind[1805]: Removed session 14. 
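All of the failing references live under ghcr.io/flatcar/calico/ at tag v3.30.4, so the mirror simply lacks those tags. One remediation, offered as an assumption rather than anything this log shows, is to copy the upstream Calico images into the mirror, for example with go-containerregistry's crane package:

```go
// mirror: a hedged sketch that copies the upstream calico images to the
// ghcr.io/flatcar mirror so the v3.30.4 tags resolve. Assumes the upstream
// docker.io/calico repositories and write access to ghcr.io/flatcar; crane
// reads push credentials from the local Docker keychain by default.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// The seven images failing in the log above.
	images := []string{
		"goldmane", "kube-controllers", "whisker", "whisker-backend",
		"apiserver", "csi", "node-driver-registrar",
	}
	for _, img := range images {
		src := "docker.io/calico/" + img + ":v3.30.4"
		dst := "ghcr.io/flatcar/calico/" + img + ":v3.30.4"
		if err := crane.Copy(src, dst); err != nil {
			log.Fatalf("copy %s -> %s: %v", src, dst, err)
		}
		log.Printf("copied %s", dst)
	}
}
```

Because every pod spec above uses ImagePullPolicy:IfNotPresent, the kubelet's next backoff retry would succeed as soon as the tags exist.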
Jan 23 23:54:06.820700 kubelet[3365]: E0123 23:54:06.820657 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:54:10.987110 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:53340.service - OpenSSH per-connection server daemon (10.200.16.10:53340). Jan 23 23:54:11.432878 sshd[6152]: Accepted publickey for core from 10.200.16.10 port 53340 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:11.434535 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:11.442319 systemd-logind[1805]: New session 15 of user core. Jan 23 23:54:11.447212 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:54:11.818891 kubelet[3365]: E0123 23:54:11.816443 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:54:11.862081 sshd[6152]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:11.870485 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:53340.service: Deactivated successfully. Jan 23 23:54:11.875744 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:54:11.879977 systemd-logind[1805]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:54:11.880910 systemd-logind[1805]: Removed session 15. 
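For the whisker pod, the kubelet folds two container failures (whisker and whisker-backend) into one "Error syncing pod" message with a bracketed list, as in the entry above. The kubelet uses its own error-aggregation type for this; the stdlib analogue below is a toy illustration of the pattern, not the kubelet's code.

```go
// joinerr: a toy sketch of aggregating several per-container start failures
// into one pod-level sync error, mirroring the bracketed list in the log.
package main

import (
	"errors"
	"fmt"
)

func main() {
	e1 := errors.New(`failed to "StartContainer" for "whisker" with ImagePullBackOff`)
	e2 := errors.New(`failed to "StartContainer" for "whisker-backend" with ImagePullBackOff`)
	// errors.Join keeps both causes inspectable while printing as one message.
	err := errors.Join(e1, e2)
	fmt.Println(err)
	fmt.Println(errors.Is(err, e1)) // true: individual causes remain matchable
}
```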
Jan 23 23:54:13.819308 kubelet[3365]: E0123 23:54:13.818992 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:54:15.816005 kubelet[3365]: E0123 23:54:15.815948 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:54:16.949218 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:53352.service - OpenSSH per-connection server daemon (10.200.16.10:53352). Jan 23 23:54:17.447807 sshd[6174]: Accepted publickey for core from 10.200.16.10 port 53352 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:17.450156 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:17.457375 systemd-logind[1805]: New session 16 of user core. Jan 23 23:54:17.461107 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:54:17.816552 kubelet[3365]: E0123 23:54:17.816512 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:54:17.894188 sshd[6174]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:17.900182 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:53352.service: Deactivated successfully. Jan 23 23:54:17.904070 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:54:17.904919 systemd-logind[1805]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:54:17.908035 systemd-logind[1805]: Removed session 16. 
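The same PullImage failure recurs per image on the backoff schedule, so the shape of the incident is easiest to see as a per-image tally. A short stdlib sketch that counts the `PullImage "..." failed` lines above from journal text on stdin:

```go
// tally: count `PullImage "..." failed` occurrences per image reference,
// reading journal text (like the lines above) from stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches both PullImage "ref" failed and the escaped form
	// PullImage \"ref\" failed that appears inside msg="..." fields.
	re := regexp.MustCompile(`PullImage \\?"([^"\\]+)\\?" failed`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1<<20) // journal lines here are very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for img, n := range counts {
		fmt.Printf("%3d  %s\n", n, img)
	}
}
```

Fed something like `journalctl -u containerd` (assuming that unit name), it would show each of the seven ghcr.io/flatcar/calico references failing repeatedly and none succeeding.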
Jan 23 23:54:18.819578 kubelet[3365]: E0123 23:54:18.819536 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:54:19.818883 containerd[1831]: time="2026-01-23T23:54:19.817631745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:54:20.073830 containerd[1831]: time="2026-01-23T23:54:20.073475270Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:20.077324 containerd[1831]: time="2026-01-23T23:54:20.077277039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:54:20.077324 containerd[1831]: time="2026-01-23T23:54:20.077365039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:54:20.077576 kubelet[3365]: E0123 23:54:20.077515 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:54:20.077893 kubelet[3365]: E0123 23:54:20.077584 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:54:20.077893 kubelet[3365]: E0123 23:54:20.077738 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bfff8d8c9-qd78x_calico-system(4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:20.079672 kubelet[3365]: E0123 23:54:20.079261 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:54:22.975119 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:34318.service - OpenSSH 
per-connection server daemon (10.200.16.10:34318). Jan 23 23:54:23.432293 sshd[6213]: Accepted publickey for core from 10.200.16.10 port 34318 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:23.434379 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:23.439325 systemd-logind[1805]: New session 17 of user core. Jan 23 23:54:23.442774 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:54:23.858102 sshd[6213]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:23.862027 systemd-logind[1805]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:54:23.862660 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:34318.service: Deactivated successfully. Jan 23 23:54:23.866357 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:54:23.867369 systemd-logind[1805]: Removed session 17. Jan 23 23:54:24.820873 containerd[1831]: time="2026-01-23T23:54:24.820015949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:54:25.113378 containerd[1831]: time="2026-01-23T23:54:25.113008007Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:25.116477 containerd[1831]: time="2026-01-23T23:54:25.116431855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:54:25.116566 containerd[1831]: time="2026-01-23T23:54:25.116533216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:54:25.117007 kubelet[3365]: E0123 23:54:25.116732 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:54:25.117007 kubelet[3365]: E0123 23:54:25.116794 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:54:25.117007 kubelet[3365]: E0123 23:54:25.116940 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlpxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mj4bl_calico-system(e6b6e508-b275-4ee9-aa24-d58a31eb441c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:25.118559 kubelet[3365]: E0123 23:54:25.118288 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:54:26.817628 kubelet[3365]: E0123 
23:54:26.817309 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:54:26.820473 containerd[1831]: time="2026-01-23T23:54:26.819954667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:54:27.069271 containerd[1831]: time="2026-01-23T23:54:27.069101455Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:27.072714 containerd[1831]: time="2026-01-23T23:54:27.072613584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:54:27.072714 containerd[1831]: time="2026-01-23T23:54:27.072678544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:54:27.072846 kubelet[3365]: E0123 23:54:27.072808 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:27.072906 kubelet[3365]: E0123 23:54:27.072850 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:54:27.073014 kubelet[3365]: E0123 23:54:27.072969 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:27.076196 containerd[1831]: time="2026-01-23T23:54:27.075989913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:54:27.363230 containerd[1831]: time="2026-01-23T23:54:27.363103476Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:27.368281 containerd[1831]: time="2026-01-23T23:54:27.367926368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:54:27.368281 containerd[1831]: time="2026-01-23T23:54:27.367988248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:54:27.368417 kubelet[3365]: E0123 23:54:27.368145 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:27.368417 kubelet[3365]: E0123 23:54:27.368188 3365 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:54:27.368833 kubelet[3365]: E0123 23:54:27.368779 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kltkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zg499_calico-system(d85237ab-62c3-4029-9724-6c41efba9b29): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:27.370022 kubelet[3365]: E0123 23:54:27.369965 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:54:27.537638 waagent[2033]: 2026-01-23T23:54:27.537572Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 23:54:27.547481 waagent[2033]: 2026-01-23T23:54:27.546720Z INFO ExtHandler Jan 23 23:54:27.547481 waagent[2033]: 2026-01-23T23:54:27.546848Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f3cdb1f6-badc-48dd-a41f-fb4e57b03907 eTag: 14030506351890729607 source: Fabric] Jan 23 23:54:27.547481 waagent[2033]: 2026-01-23T23:54:27.547228Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 23:54:27.548324 waagent[2033]: 2026-01-23T23:54:27.547881Z INFO ExtHandler Jan 23 23:54:27.548324 waagent[2033]: 2026-01-23T23:54:27.548003Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 23:54:27.615843 waagent[2033]: 2026-01-23T23:54:27.615720Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 23:54:27.704937 waagent[2033]: 2026-01-23T23:54:27.704814Z INFO ExtHandler Downloaded certificate {'thumbprint': '9FA7D53A58DBE7B6EEEB575CBE2EDEC6CA375504', 'hasPrivateKey': True} Jan 23 23:54:27.706272 waagent[2033]: 2026-01-23T23:54:27.705411Z INFO ExtHandler Fetch goal state completed Jan 23 23:54:27.706272 waagent[2033]: 2026-01-23T23:54:27.705769Z INFO ExtHandler ExtHandler Jan 23 23:54:27.706272 waagent[2033]: 2026-01-23T23:54:27.705836Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 3abba5f6-ca44-4dd5-8ef0-42e43ad837ad correlation e9f68aa3-434b-45f0-8305-1a6968adb595 created: 2026-01-23T23:54:21.977295Z] Jan 23 23:54:27.706272 waagent[2033]: 2026-01-23T23:54:27.706164Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 23:54:27.706731 waagent[2033]: 2026-01-23T23:54:27.706686Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 23:54:28.944113 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:34328.service - OpenSSH per-connection server daemon (10.200.16.10:34328). Jan 23 23:54:29.448879 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 34328 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:29.452319 sshd[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:29.459575 systemd-logind[1805]: New session 18 of user core. Jan 23 23:54:29.465134 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:54:29.817377 containerd[1831]: time="2026-01-23T23:54:29.817337733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:54:29.926799 sshd[6253]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:29.932424 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:34328.service: Deactivated successfully. Jan 23 23:54:29.936549 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:54:29.937042 systemd-logind[1805]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:54:29.939481 systemd-logind[1805]: Removed session 18. Jan 23 23:54:30.015120 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:36464.service - OpenSSH per-connection server daemon (10.200.16.10:36464). 
Jan 23 23:54:30.067736 containerd[1831]: time="2026-01-23T23:54:30.067573698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:30.071241 containerd[1831]: time="2026-01-23T23:54:30.071152107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:54:30.071241 containerd[1831]: time="2026-01-23T23:54:30.071210947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:54:30.071424 kubelet[3365]: E0123 23:54:30.071378 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:54:30.071707 kubelet[3365]: E0123 23:54:30.071435 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:54:30.071707 kubelet[3365]: E0123 23:54:30.071536 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fc17560a7d374cc8a5379ccc150c7fc7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:30.074951 containerd[1831]: 
time="2026-01-23T23:54:30.074898716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:54:30.362745 containerd[1831]: time="2026-01-23T23:54:30.362492217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:30.366742 containerd[1831]: time="2026-01-23T23:54:30.366460067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:54:30.366742 containerd[1831]: time="2026-01-23T23:54:30.366567067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:54:30.367915 kubelet[3365]: E0123 23:54:30.366719 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:54:30.367915 kubelet[3365]: E0123 23:54:30.366909 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:54:30.367915 kubelet[3365]: E0123 23:54:30.367019 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hl6hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bffcb8bdf-jghmp_calico-system(020e3cf1-f5a2-4e03-b3f1-21fc39350338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:30.368476 kubelet[3365]: E0123 23:54:30.368440 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:54:30.512044 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 36464 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:30.513483 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:30.518580 systemd-logind[1805]: New session 19 of user core. Jan 23 23:54:30.527408 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 23:54:30.821815 kubelet[3365]: E0123 23:54:30.819132 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:54:31.143167 sshd[6266]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:31.150317 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:36464.service: Deactivated successfully. Jan 23 23:54:31.156345 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:54:31.157756 systemd-logind[1805]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:54:31.160315 systemd-logind[1805]: Removed session 19. Jan 23 23:54:31.229552 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:36478.service - OpenSSH per-connection server daemon (10.200.16.10:36478). Jan 23 23:54:31.729923 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 36478 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:31.731844 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:31.736421 systemd-logind[1805]: New session 20 of user core. Jan 23 23:54:31.739118 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:54:32.823013 containerd[1831]: time="2026-01-23T23:54:32.821572628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:54:32.963346 sshd[6278]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:32.970064 systemd-logind[1805]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:54:32.971171 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:36478.service: Deactivated successfully. Jan 23 23:54:32.972696 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:54:32.981259 systemd-logind[1805]: Removed session 20. Jan 23 23:54:33.050380 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:36488.service - OpenSSH per-connection server daemon (10.200.16.10:36488). 
Jan 23 23:54:33.081656 containerd[1831]: time="2026-01-23T23:54:33.080469774Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:33.085566 containerd[1831]: time="2026-01-23T23:54:33.085204066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:54:33.085566 containerd[1831]: time="2026-01-23T23:54:33.085304507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:54:33.085683 kubelet[3365]: E0123 23:54:33.085446 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:54:33.085683 kubelet[3365]: E0123 23:54:33.085519 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:54:33.087818 kubelet[3365]: E0123 23:54:33.086067 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nffqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-2csvx_calico-apiserver(e432e42b-559a-473b-8e55-fe59b8af82e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:33.087818 kubelet[3365]: E0123 23:54:33.087217 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:54:33.541903 sshd[6303]: Accepted publickey for core from 10.200.16.10 port 36488 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:33.543581 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:33.547375 systemd-logind[1805]: New session 21 of user core. Jan 23 23:54:33.554560 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:54:34.275433 sshd[6303]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:34.284100 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:36488.service: Deactivated successfully. Jan 23 23:54:34.286723 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:54:34.288747 systemd-logind[1805]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:54:34.289995 systemd-logind[1805]: Removed session 21. Jan 23 23:54:34.364688 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:36502.service - OpenSSH per-connection server daemon (10.200.16.10:36502). Jan 23 23:54:34.875607 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 36502 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:34.877431 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:34.885460 systemd-logind[1805]: New session 22 of user core. Jan 23 23:54:34.893175 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:54:35.307602 sshd[6315]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:35.314546 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:36502.service: Deactivated successfully. Jan 23 23:54:35.323765 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 23 23:54:35.327556 systemd-logind[1805]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:54:35.330966 systemd-logind[1805]: Removed session 22. Jan 23 23:54:38.818291 kubelet[3365]: E0123 23:54:38.818245 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:54:39.819656 containerd[1831]: time="2026-01-23T23:54:39.819427167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:54:40.088263 containerd[1831]: time="2026-01-23T23:54:40.087805146Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:54:40.091394 containerd[1831]: time="2026-01-23T23:54:40.091297955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:54:40.091394 containerd[1831]: time="2026-01-23T23:54:40.091347795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:54:40.091553 kubelet[3365]: E0123 23:54:40.091509 3365 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:54:40.091869 kubelet[3365]: E0123 23:54:40.091558 3365 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:54:40.091869 kubelet[3365]: E0123 23:54:40.091714 3365 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgwjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8fbd4d54-j9z4b_calico-apiserver(86223f84-2792-4d18-8124-56ab2f35f54f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:54:40.093183 kubelet[3365]: E0123 23:54:40.093149 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:54:40.393101 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:41966.service - OpenSSH per-connection server daemon (10.200.16.10:41966). Jan 23 23:54:40.886843 sshd[6331]: Accepted publickey for core from 10.200.16.10 port 41966 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:40.888213 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:40.892508 systemd-logind[1805]: New session 23 of user core. Jan 23 23:54:40.897157 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 23:54:41.302098 sshd[6331]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:41.304665 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:41966.service: Deactivated successfully. Jan 23 23:54:41.308968 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:54:41.310360 systemd-logind[1805]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:54:41.311542 systemd-logind[1805]: Removed session 23. Jan 23 23:54:41.819783 kubelet[3365]: E0123 23:54:41.819717 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:54:43.818160 kubelet[3365]: E0123 23:54:43.817311 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:54:43.818160 kubelet[3365]: E0123 23:54:43.818021 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:54:46.387248 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:41968.service - OpenSSH per-connection server daemon (10.200.16.10:41968). 
Jan 23 23:54:46.877917 sshd[6347]: Accepted publickey for core from 10.200.16.10 port 41968 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:46.879387 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:46.885814 systemd-logind[1805]: New session 24 of user core. Jan 23 23:54:46.890123 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:54:47.321163 sshd[6347]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:47.326278 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:41968.service: Deactivated successfully. Jan 23 23:54:47.334206 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:54:47.335933 systemd-logind[1805]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:54:47.337140 systemd-logind[1805]: Removed session 24. Jan 23 23:54:47.819412 kubelet[3365]: E0123 23:54:47.817081 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:54:50.818151 kubelet[3365]: E0123 23:54:50.817677 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:54:52.409902 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:35828.service - OpenSSH per-connection server daemon (10.200.16.10:35828). Jan 23 23:54:52.894323 sshd[6382]: Accepted publickey for core from 10.200.16.10 port 35828 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:52.896746 sshd[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:52.903045 systemd-logind[1805]: New session 25 of user core. Jan 23 23:54:52.910105 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:54:53.366904 sshd[6382]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:53.370751 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:35828.service: Deactivated successfully. Jan 23 23:54:53.371023 systemd-logind[1805]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:54:53.379391 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:54:53.380594 systemd-logind[1805]: Removed session 25. 
Jan 23 23:54:54.820541 kubelet[3365]: E0123 23:54:54.819153 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29" Jan 23 23:54:54.821649 kubelet[3365]: E0123 23:54:54.821440 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-j9z4b" podUID="86223f84-2792-4d18-8124-56ab2f35f54f" Jan 23 23:54:54.821649 kubelet[3365]: E0123 23:54:54.821549 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bffcb8bdf-jghmp" podUID="020e3cf1-f5a2-4e03-b3f1-21fc39350338" Jan 23 23:54:57.817990 kubelet[3365]: E0123 23:54:57.817650 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bfff8d8c9-qd78x" podUID="4b3f4ecd-6a6d-421d-82f3-6ec813fdbd24" Jan 23 23:54:58.449036 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:35842.service - OpenSSH per-connection 
server daemon (10.200.16.10:35842). Jan 23 23:54:58.909796 sshd[6397]: Accepted publickey for core from 10.200.16.10 port 35842 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:54:58.913724 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:58.921182 systemd-logind[1805]: New session 26 of user core. Jan 23 23:54:58.926156 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 23:54:59.310302 sshd[6397]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:59.316206 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:35842.service: Deactivated successfully. Jan 23 23:54:59.319118 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 23:54:59.321307 systemd-logind[1805]: Session 26 logged out. Waiting for processes to exit. Jan 23 23:54:59.323092 systemd-logind[1805]: Removed session 26. Jan 23 23:55:01.815887 kubelet[3365]: E0123 23:55:01.815538 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mj4bl" podUID="e6b6e508-b275-4ee9-aa24-d58a31eb441c" Jan 23 23:55:02.822552 kubelet[3365]: E0123 23:55:02.822511 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8fbd4d54-2csvx" podUID="e432e42b-559a-473b-8e55-fe59b8af82e5" Jan 23 23:55:04.392392 systemd[1]: Started sshd@24-10.200.20.33:22-10.200.16.10:47366.service - OpenSSH per-connection server daemon (10.200.16.10:47366). Jan 23 23:55:04.864382 sshd[6410]: Accepted publickey for core from 10.200.16.10 port 47366 ssh2: RSA SHA256:zqrJ24ORRlM2cz3Sa5JvoDP+owi9sASZGUjBOtO/AIg Jan 23 23:55:04.867490 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:04.875808 systemd-logind[1805]: New session 27 of user core. Jan 23 23:55:04.880122 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 23:55:05.259137 sshd[6410]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:05.265188 systemd[1]: sshd@24-10.200.20.33:22-10.200.16.10:47366.service: Deactivated successfully. Jan 23 23:55:05.270357 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 23:55:05.271486 systemd-logind[1805]: Session 27 logged out. Waiting for processes to exit. Jan 23 23:55:05.272822 systemd-logind[1805]: Removed session 27. 
Jan 23 23:55:06.821099 kubelet[3365]: E0123 23:55:06.821025 3365 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zg499" podUID="d85237ab-62c3-4029-9724-6c41efba9b29"